00:00:00.001 Started by upstream project "autotest-per-patch" build number 127211
00:00:00.001 originally caused by:
00:00:00.001 Started by user sys_sgci
00:00:00.094 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy
00:00:00.129 The recommended git tool is: git
00:00:00.129 using credential 00000000-0000-0000-0000-000000000002
00:00:00.132 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.168 Fetching changes from the remote Git repository
00:00:00.170 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.213 Using shallow fetch with depth 1
00:00:00.214 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.214 > git --version # timeout=10
00:00:00.243 > git --version # 'git version 2.39.2'
00:00:00.243 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.267 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.267 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:05.564 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:05.576 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:05.589 Checking out Revision 4313f32deecbb7108199ebd1913b403a3005dece (FETCH_HEAD)
00:00:05.589 > git config core.sparsecheckout # timeout=10
00:00:05.601 > git read-tree -mu HEAD # timeout=10
00:00:05.617 > git checkout -f 4313f32deecbb7108199ebd1913b403a3005dece # timeout=5
00:00:05.643 Commit message: "packer: Add bios builder"
00:00:05.644 > git rev-list --no-walk 4313f32deecbb7108199ebd1913b403a3005dece # timeout=10
00:00:05.745 [Pipeline] Start of Pipeline
00:00:05.759 [Pipeline] library
00:00:05.761 Loading library shm_lib@master
00:00:05.761 Library shm_lib@master is cached. Copying from home.
00:00:05.783 [Pipeline] node
00:00:05.793 Running on GP11 in /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:00:05.794 [Pipeline] {
00:00:05.803 [Pipeline] catchError
00:00:05.804 [Pipeline] {
00:00:05.816 [Pipeline] wrap
00:00:05.824 [Pipeline] {
00:00:05.830 [Pipeline] stage
00:00:05.831 [Pipeline] { (Prologue)
00:00:06.015 [Pipeline] sh
00:00:06.301 + logger -p user.info -t JENKINS-CI
00:00:06.320 [Pipeline] echo
00:00:06.322 Node: GP11
00:00:06.329 [Pipeline] sh
00:00:06.630 [Pipeline] setCustomBuildProperty
00:00:06.643 [Pipeline] echo
00:00:06.644 Cleanup processes
00:00:06.648 [Pipeline] sh
00:00:06.926 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.926 2664785 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:06.938 [Pipeline] sh
00:00:07.214 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:00:07.214 ++ awk '{print $1}'
00:00:07.214 ++ grep -v 'sudo pgrep'
00:00:07.214 + sudo kill -9
00:00:07.214 + true
00:00:07.227 [Pipeline] cleanWs
00:00:07.236 [WS-CLEANUP] Deleting project workspace...
00:00:07.236 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.243 [WS-CLEANUP] done
00:00:07.247 [Pipeline] setCustomBuildProperty
00:00:07.261 [Pipeline] sh
00:00:07.541 + sudo git config --global --replace-all safe.directory '*'
00:00:07.634 [Pipeline] httpRequest
00:00:07.667 [Pipeline] echo
00:00:07.669 Sorcerer 10.211.164.101 is alive
00:00:07.677 [Pipeline] httpRequest
00:00:07.681 HttpMethod: GET
00:00:07.681 URL: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:07.681 Sending request to url: http://10.211.164.101/packages/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:07.698 Response Code: HTTP/1.1 200 OK
00:00:07.699 Success: Status code 200 is in the accepted range: 200,404
00:00:07.699 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:15.153 [Pipeline] sh
00:00:15.450 + tar --no-same-owner -xf jbp_4313f32deecbb7108199ebd1913b403a3005dece.tar.gz
00:00:15.461 [Pipeline] httpRequest
00:00:15.496 [Pipeline] echo
00:00:15.498 Sorcerer 10.211.164.101 is alive
00:00:15.505 [Pipeline] httpRequest
00:00:15.509 HttpMethod: GET
00:00:15.509 URL: http://10.211.164.101/packages/spdk_fb47d95177b47edf1fb7d3deb3b8475bd4301eec.tar.gz
00:00:15.510 Sending request to url: http://10.211.164.101/packages/spdk_fb47d95177b47edf1fb7d3deb3b8475bd4301eec.tar.gz
00:00:15.535 Response Code: HTTP/1.1 200 OK
00:00:15.535 Success: Status code 200 is in the accepted range: 200,404
00:00:15.536 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_fb47d95177b47edf1fb7d3deb3b8475bd4301eec.tar.gz
00:01:07.245 [Pipeline] sh
00:01:07.534 + tar --no-same-owner -xf spdk_fb47d95177b47edf1fb7d3deb3b8475bd4301eec.tar.gz
00:01:10.081 [Pipeline] sh
00:01:10.369 + git -C spdk log --oneline -n5
00:01:10.369 fb47d9517 bdev/raid: recalculate `data_offset` when a base_bdev is configured
00:01:10.369 704257090 lib/reduce: fix the incorrect calculation method for the number of io_unit required for metadata.
00:01:10.369 fc2398dfa raid: clear base bdev configure_cb after executing
00:01:10.369 5558f3f50 raid: complete bdev_raid_create after sb is written
00:01:10.369 d005e023b raid: fix empty slot not updated in sb after resize
00:01:10.382 [Pipeline] }
00:01:10.395 [Pipeline] // stage
00:01:10.405 [Pipeline] stage
00:01:10.407 [Pipeline] { (Prepare)
00:01:10.422 [Pipeline] writeFile
00:01:10.435 [Pipeline] sh
00:01:10.715 + logger -p user.info -t JENKINS-CI
00:01:10.727 [Pipeline] sh
00:01:11.013 + logger -p user.info -t JENKINS-CI
00:01:11.027 [Pipeline] sh
00:01:11.325 + cat autorun-spdk.conf
00:01:11.325 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.325 SPDK_TEST_NVMF=1
00:01:11.325 SPDK_TEST_NVME_CLI=1
00:01:11.325 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:11.325 SPDK_TEST_NVMF_NICS=e810
00:01:11.325 SPDK_TEST_VFIOUSER=1
00:01:11.325 SPDK_RUN_UBSAN=1
00:01:11.325 NET_TYPE=phy
00:01:11.333 RUN_NIGHTLY=0
00:01:11.337 [Pipeline] readFile
00:01:11.358 [Pipeline] withEnv
00:01:11.359 [Pipeline] {
00:01:11.370 [Pipeline] sh
00:01:11.652 + set -ex
00:01:11.653 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]]
00:01:11.653 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:11.653 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:11.653 ++ SPDK_TEST_NVMF=1
00:01:11.653 ++ SPDK_TEST_NVME_CLI=1
00:01:11.653 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:11.653 ++ SPDK_TEST_NVMF_NICS=e810
00:01:11.653 ++ SPDK_TEST_VFIOUSER=1
00:01:11.653 ++ SPDK_RUN_UBSAN=1
00:01:11.653 ++ NET_TYPE=phy
00:01:11.653 ++ RUN_NIGHTLY=0
00:01:11.653 + case $SPDK_TEST_NVMF_NICS in
00:01:11.653 + DRIVERS=ice
00:01:11.653 + [[ tcp == \r\d\m\a ]]
00:01:11.653 + [[ -n ice ]]
00:01:11.653 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4
00:01:11.653 rmmod: ERROR: Module mlx4_ib is not currently loaded
00:01:11.653 rmmod: ERROR: Module mlx5_ib is not currently loaded
00:01:11.653 rmmod: ERROR: Module irdma is not currently loaded
00:01:11.653 rmmod: ERROR: Module i40iw is not currently loaded
00:01:11.653 rmmod: ERROR: Module iw_cxgb4 is not currently loaded
00:01:11.653 + true
00:01:11.653 + for D in $DRIVERS
00:01:11.653 + sudo modprobe ice
00:01:11.653 + exit 0
00:01:11.663 [Pipeline] }
00:01:11.675 [Pipeline] // withEnv
00:01:11.680 [Pipeline] }
00:01:11.691 [Pipeline] // stage
00:01:11.701 [Pipeline] catchError
00:01:11.703 [Pipeline] {
00:01:11.719 [Pipeline] timeout
00:01:11.719 Timeout set to expire in 50 min
00:01:11.721 [Pipeline] {
00:01:11.736 [Pipeline] stage
00:01:11.738 [Pipeline] { (Tests)
00:01:11.750 [Pipeline] sh
00:01:12.036 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:12.036 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:12.036 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:12.036 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]]
00:01:12.036 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:12.036 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:12.036 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]]
00:01:12.036 + [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:12.036 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output
00:01:12.036 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]]
00:01:12.036 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]]
00:01:12.036 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:01:12.036 + source /etc/os-release
00:01:12.036 ++ NAME='Fedora Linux'
00:01:12.036 ++ VERSION='38 (Cloud Edition)'
00:01:12.036 ++ ID=fedora
00:01:12.036 ++ VERSION_ID=38
00:01:12.036 ++ VERSION_CODENAME=
00:01:12.036 ++ PLATFORM_ID=platform:f38
00:01:12.036 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)'
00:01:12.036 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:12.036 ++ LOGO=fedora-logo-icon
00:01:12.036 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38
00:01:12.036 ++ HOME_URL=https://fedoraproject.org/
00:01:12.036 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/
00:01:12.036 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:12.036 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:12.036 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:12.037 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38
00:01:12.037 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:12.037 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38
00:01:12.037 ++ SUPPORT_END=2024-05-14
00:01:12.037 ++ VARIANT='Cloud Edition'
00:01:12.037 ++ VARIANT_ID=cloud
00:01:12.037 + uname -a
00:01:12.037 Linux spdk-gp-11 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux
00:01:12.037 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status
00:01:12.974 Hugepages
00:01:12.974 node hugesize free / total
00:01:12.974 node0 1048576kB 0 / 0
00:01:12.974 node0 2048kB 0 / 0
00:01:12.974 node1 1048576kB 0 / 0
00:01:12.974 node1 2048kB 0 / 0
00:01:12.974
00:01:12.974 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:12.974 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - -
00:01:12.974 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - -
00:01:12.974 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - -
00:01:12.974 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - -
00:01:12.974 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - -
00:01:12.974 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - -
00:01:12.974 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - -
00:01:12.974 I/OAT 0000:00:04.7 8086 0e27 0 ioatdma - -
00:01:12.974 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - -
00:01:12.974 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - -
00:01:12.974 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - -
00:01:12.974 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - -
00:01:12.974 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - -
00:01:12.974 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - -
00:01:12.974 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - -
00:01:12.974 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - -
00:01:12.974 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1
00:01:12.974 + rm -f /tmp/spdk-ld-path
00:01:12.974 + source autorun-spdk.conf
00:01:12.975 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:12.975 ++ SPDK_TEST_NVMF=1
00:01:12.975 ++ SPDK_TEST_NVME_CLI=1
00:01:12.975 ++ SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:12.975 ++ SPDK_TEST_NVMF_NICS=e810
00:01:12.975 ++ SPDK_TEST_VFIOUSER=1
00:01:12.975 ++ SPDK_RUN_UBSAN=1
00:01:12.975 ++ NET_TYPE=phy
00:01:12.975 ++ RUN_NIGHTLY=0
00:01:12.975 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:12.975 + [[ -n '' ]]
00:01:12.975 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:12.975 + for M in /var/spdk/build-*-manifest.txt
00:01:12.975 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:12.975 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:12.975 + for M in /var/spdk/build-*-manifest.txt
00:01:12.975 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:12.975 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/
00:01:12.975 ++ uname
00:01:12.975 + [[ Linux == \L\i\n\u\x ]]
00:01:12.975 + sudo dmesg -T
00:01:13.233 + sudo dmesg --clear
00:01:13.233 + dmesg_pid=2665460
00:01:13.233 + [[ Fedora Linux == FreeBSD ]]
00:01:13.233 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:13.233 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:13.233 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:13.233 + sudo dmesg -Tw
00:01:13.233 + [[ -x /usr/src/fio-static/fio ]]
00:01:13.233 + export FIO_BIN=/usr/src/fio-static/fio
00:01:13.233 + FIO_BIN=/usr/src/fio-static/fio
00:01:13.233 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:13.233 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:13.233 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:13.233 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:13.234 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:13.234 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:13.234 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:13.234 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:13.234 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf
00:01:13.234 Test configuration:
00:01:13.234 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:13.234 SPDK_TEST_NVMF=1
00:01:13.234 SPDK_TEST_NVME_CLI=1
00:01:13.234 SPDK_TEST_NVMF_TRANSPORT=tcp
00:01:13.234 SPDK_TEST_NVMF_NICS=e810
00:01:13.234 SPDK_TEST_VFIOUSER=1
00:01:13.234 SPDK_RUN_UBSAN=1
00:01:13.234 NET_TYPE=phy
00:01:13.234 RUN_NIGHTLY=0
00:01:13.234 12:02:06 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:01:13.234 12:02:06 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:13.234 12:02:06 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:13.234 12:02:06 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:13.234 12:02:06 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.234 12:02:06 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.234 12:02:06 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.234 12:02:06 -- paths/export.sh@5 -- $ export PATH
00:01:13.234 12:02:06 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:13.234 12:02:06 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:01:13.234 12:02:06 -- common/autobuild_common.sh@447 -- $ date +%s
00:01:13.234 12:02:06 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721988126.XXXXXX
00:01:13.234 12:02:06 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721988126.WJak65
00:01:13.234 12:02:06 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:01:13.234 12:02:06 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:01:13.234 12:02:06 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:01:13.234 12:02:06 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:01:13.234 12:02:06 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:01:13.234 12:02:06 -- common/autobuild_common.sh@463 -- $ get_config_params
00:01:13.234 12:02:06 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:01:13.234 12:02:06 -- common/autotest_common.sh@10 -- $ set +x
00:01:13.234 12:02:06 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:01:13.234 12:02:06 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:01:13.234 12:02:06 -- pm/common@17 -- $ local monitor
00:01:13.234 12:02:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.234 12:02:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.234 12:02:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.234 12:02:06 -- pm/common@21 -- $ date +%s
00:01:13.234 12:02:06 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:13.234 12:02:06 -- pm/common@21 -- $ date +%s
00:01:13.234 12:02:06 -- pm/common@25 -- $ sleep 1
00:01:13.234 12:02:06 -- pm/common@21 -- $ date +%s
00:01:13.234 12:02:06 -- pm/common@21 -- $ date +%s
00:01:13.234 12:02:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721988126
00:01:13.234 12:02:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721988126
00:01:13.234 12:02:06 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721988126
00:01:13.234 12:02:06 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721988126
00:01:13.234 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721988126_collect-vmstat.pm.log
00:01:13.234 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721988126_collect-cpu-load.pm.log
00:01:13.234 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721988126_collect-cpu-temp.pm.log
00:01:13.234 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721988126_collect-bmc-pm.bmc.pm.log
00:01:14.197 12:02:07 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:01:14.197 12:02:07 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:14.197 12:02:07 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:14.197 12:02:07 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:01:14.197 12:02:07 -- spdk/autobuild.sh@16 -- $ date -u
00:01:14.197 Fri Jul 26 10:02:07 AM UTC 2024
00:01:14.197 12:02:07 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:14.197 v24.09-pre-322-gfb47d9517
00:01:14.197 12:02:07 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']'
00:01:14.197 12:02:07 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:14.197 12:02:07 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:14.197 12:02:07 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:14.197 12:02:07 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:14.197 12:02:07 -- common/autotest_common.sh@10 -- $ set +x
00:01:14.197 ************************************
00:01:14.197 START TEST ubsan
00:01:14.197 ************************************
00:01:14.197 12:02:07 ubsan -- common/autotest_common.sh@1125 -- $ echo 'using ubsan'
00:01:14.197 using ubsan
00:01:14.197
00:01:14.197 real 0m0.000s
00:01:14.197 user 0m0.000s
00:01:14.197 sys 0m0.000s
00:01:14.197 12:02:07 ubsan -- common/autotest_common.sh@1126 -- $ xtrace_disable
00:01:14.197 12:02:07 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:14.197 ************************************
00:01:14.197 END TEST ubsan
00:01:14.197 ************************************
00:01:14.197 12:02:07 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:14.197 12:02:07 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:14.197 12:02:07 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:14.197 12:02:07 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:14.197 12:02:07 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:14.197 12:02:07 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:14.197 12:02:07 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:14.197 12:02:07 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:14.197 12:02:07 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared
00:01:14.456 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk
00:01:14.456 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build
00:01:14.714 Using 'verbs' RDMA provider
00:01:25.264 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done.
00:01:35.248 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done.
00:01:35.248 Creating mk/config.mk...done.
00:01:35.248 Creating mk/cc.flags.mk...done.
00:01:35.248 Type 'make' to build.
00:01:35.248 12:02:27 -- spdk/autobuild.sh@69 -- $ run_test make make -j48
00:01:35.248 12:02:27 -- common/autotest_common.sh@1101 -- $ '[' 3 -le 1 ']'
00:01:35.248 12:02:27 -- common/autotest_common.sh@1107 -- $ xtrace_disable
00:01:35.248 12:02:27 -- common/autotest_common.sh@10 -- $ set +x
00:01:35.248 ************************************
00:01:35.248 START TEST make
00:01:35.248 ************************************
00:01:35.248 12:02:27 make -- common/autotest_common.sh@1125 -- $ make -j48
00:01:35.248 make[1]: Nothing to be done for 'all'.
00:01:36.205 The Meson build system
00:01:36.205 Version: 1.3.1
00:01:36.205 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user
00:01:36.205 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:36.205 Build type: native build
00:01:36.205 Project name: libvfio-user
00:01:36.205 Project version: 0.0.1
00:01:36.205 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:36.205 C linker for the host machine: cc ld.bfd 2.39-16
00:01:36.205 Host machine cpu family: x86_64
00:01:36.205 Host machine cpu: x86_64
00:01:36.205 Run-time dependency threads found: YES
00:01:36.205 Library dl found: YES
00:01:36.205 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:36.205 Run-time dependency json-c found: YES 0.17
00:01:36.205 Run-time dependency cmocka found: YES 1.1.7
00:01:36.205 Program pytest-3 found: NO
00:01:36.205 Program flake8 found: NO
00:01:36.205 Program misspell-fixer found: NO
00:01:36.205 Program restructuredtext-lint found: NO
00:01:36.205 Program valgrind found: YES (/usr/bin/valgrind)
00:01:36.205 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:36.205 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:36.205 Compiler for C supports arguments -Wwrite-strings: YES
00:01:36.205 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:36.205 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh)
00:01:36.205 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh)
00:01:36.205 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup.
00:01:36.205 Build targets in project: 8
00:01:36.205 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions:
00:01:36.205 * 0.57.0: {'exclude_suites arg in add_test_setup'}
00:01:36.205
00:01:36.205 libvfio-user 0.0.1
00:01:36.205
00:01:36.205 User defined options
00:01:36.205 buildtype : debug
00:01:36.205 default_library: shared
00:01:36.205 libdir : /usr/local/lib
00:01:36.205
00:01:36.205 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:01:36.778 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:37.039 [1/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o
00:01:37.039 [2/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o
00:01:37.039 [3/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o
00:01:37.039 [4/37] Compiling C object samples/client.p/.._lib_tran.c.o
00:01:37.039 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o
00:01:37.039 [6/37] Compiling C object samples/lspci.p/lspci.c.o
00:01:37.039 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o
00:01:37.039 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o
00:01:37.039 [9/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o
00:01:37.039 [10/37] Compiling C object samples/client.p/.._lib_migration.c.o
00:01:37.039 [11/37] Compiling C object samples/server.p/server.c.o
00:01:37.039 [12/37] Compiling C object samples/null.p/null.c.o
00:01:37.303 [13/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o
00:01:37.303 [14/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o
00:01:37.303 [15/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o
00:01:37.303 [16/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o
00:01:37.303 [17/37] Compiling C object samples/client.p/client.c.o
00:01:37.303 [18/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o
00:01:37.303 [19/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o
00:01:37.303 [20/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o
00:01:37.303 [21/37] Compiling C object test/unit_tests.p/mocks.c.o
00:01:37.303 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o
00:01:37.303 [23/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o
00:01:37.303 [24/37] Linking target samples/client
00:01:37.303 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o
00:01:37.303 [26/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o
00:01:37.303 [27/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o
00:01:37.303 [28/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o
00:01:37.562 [29/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o
00:01:37.562 [30/37] Linking target lib/libvfio-user.so.0.0.1
00:01:37.562 [31/37] Linking target test/unit_tests
00:01:37.562 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols
00:01:37.857 [33/37] Linking target samples/gpio-pci-idio-16
00:01:37.857 [34/37] Linking target samples/server
00:01:37.857 [35/37] Linking target samples/null
00:01:37.857 [36/37] Linking target samples/shadow_ioeventfd_server
00:01:37.857 [37/37] Linking target samples/lspci
00:01:37.857 INFO: autodetecting backend as ninja
00:01:37.857 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:37.857 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug
00:01:38.460 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug'
00:01:38.460 ninja: no work to do.
00:01:43.734 The Meson build system
00:01:43.734 Version: 1.3.1
00:01:43.734 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk
00:01:43.734 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp
00:01:43.734 Build type: native build
00:01:43.734 Program cat found: YES (/usr/bin/cat)
00:01:43.734 Project name: DPDK
00:01:43.734 Project version: 24.03.0
00:01:43.734 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)")
00:01:43.734 C linker for the host machine: cc ld.bfd 2.39-16
00:01:43.734 Host machine cpu family: x86_64
00:01:43.734 Host machine cpu: x86_64
00:01:43.734 Message: ## Building in Developer Mode ##
00:01:43.734 Program pkg-config found: YES (/usr/bin/pkg-config)
00:01:43.734 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh)
00:01:43.734 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:01:43.734 Program python3 found: YES (/usr/bin/python3)
00:01:43.734 Program cat found: YES (/usr/bin/cat)
00:01:43.734 Compiler for C supports arguments -march=native: YES
00:01:43.734 Checking for size of "void *" : 8
00:01:43.734 Checking for size of "void *" : 8 (cached)
00:01:43.734 Compiler for C supports link arguments -Wl,--undefined-version: NO
00:01:43.734 Library m found: YES
00:01:43.734 Library numa found: YES
00:01:43.734 Has header "numaif.h" : YES
00:01:43.734 Library fdt found: NO
00:01:43.734 Library execinfo found: NO
00:01:43.734 Has header "execinfo.h" : YES
00:01:43.734 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0
00:01:43.734 Run-time dependency libarchive found: NO (tried pkgconfig)
00:01:43.734 Run-time dependency libbsd found: NO (tried pkgconfig)
00:01:43.734 Run-time dependency jansson found: NO (tried pkgconfig)
00:01:43.734 Run-time dependency openssl found: YES 3.0.9
00:01:43.734 Run-time dependency libpcap found: YES 1.10.4
00:01:43.734 Has header "pcap.h" with dependency libpcap: YES
00:01:43.734 Compiler for C supports arguments -Wcast-qual: YES
00:01:43.734 Compiler for C supports arguments -Wdeprecated: YES
00:01:43.734 Compiler for C supports arguments -Wformat: YES
00:01:43.734 Compiler for C supports arguments -Wformat-nonliteral: NO
00:01:43.734 Compiler for C supports arguments -Wformat-security: NO
00:01:43.734 Compiler for C supports arguments -Wmissing-declarations: YES
00:01:43.734 Compiler for C supports arguments -Wmissing-prototypes: YES
00:01:43.734 Compiler for C supports arguments -Wnested-externs: YES
00:01:43.734 Compiler for C supports arguments -Wold-style-definition: YES
00:01:43.734 Compiler for C supports arguments -Wpointer-arith: YES
00:01:43.734 Compiler for C supports arguments -Wsign-compare: YES
00:01:43.734 Compiler for C supports arguments -Wstrict-prototypes: YES
00:01:43.734 Compiler for C supports arguments -Wundef: YES
00:01:43.734 Compiler for C supports arguments -Wwrite-strings: YES
00:01:43.734 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:01:43.734 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:01:43.734 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:01:43.734 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:01:43.734 Program objdump found: YES (/usr/bin/objdump)
00:01:43.734 Compiler for C supports arguments -mavx512f: YES
00:01:43.734 Checking if "AVX512 checking" compiles: YES
00:01:43.734 Fetching value of define "__SSE4_2__" : 1
00:01:43.734 Fetching value of define "__AES__" : 1
00:01:43.734 Fetching value of define "__AVX__" : 1
00:01:43.734 Fetching value of define "__AVX2__" : (undefined)
00:01:43.734 Fetching value of define "__AVX512BW__" : (undefined)
00:01:43.734 Fetching value of define "__AVX512CD__" : (undefined)
00:01:43.734 Fetching value of define "__AVX512DQ__" : (undefined)
00:01:43.734 Fetching value of define "__AVX512F__" : (undefined)
00:01:43.734 Fetching value of define "__AVX512VL__" : (undefined)
00:01:43.734 Fetching value of define "__PCLMUL__" : 1
00:01:43.734 Fetching value of define "__RDRND__" : 1
00:01:43.734 Fetching value of define "__RDSEED__" : (undefined)
00:01:43.734 Fetching value of define "__VPCLMULQDQ__" : (undefined)
00:01:43.734 Fetching value of define "__znver1__" : (undefined)
00:01:43.734 Fetching value of define "__znver2__" : (undefined)
00:01:43.734 Fetching value of define "__znver3__" : (undefined)
00:01:43.734 Fetching value of define "__znver4__" : (undefined)
00:01:43.734 Compiler for C supports arguments -Wno-format-truncation: YES
00:01:43.734 Message: lib/log: Defining dependency "log"
00:01:43.734 Message: lib/kvargs: Defining dependency "kvargs"
00:01:43.734 Message: lib/telemetry: Defining dependency "telemetry"
00:01:43.734 Checking for function "getentropy" : NO
00:01:43.734 Message: lib/eal: Defining dependency "eal"
00:01:43.734 Message: lib/ring: Defining dependency "ring"
00:01:43.734 Message: lib/rcu: Defining dependency "rcu"
00:01:43.734 Message: lib/mempool: Defining dependency "mempool"
00:01:43.734 Message: lib/mbuf: Defining dependency "mbuf"
00:01:43.734 Fetching value of define "__PCLMUL__" : 1 (cached)
00:01:43.734 Fetching value of define "__AVX512F__" : (undefined) (cached)
00:01:43.734 Compiler for C supports arguments -mpclmul: YES
00:01:43.734 Compiler for C supports arguments -maes: YES
00:01:43.734 Compiler for C supports arguments -mavx512f: YES (cached)
00:01:43.734 Compiler for C supports arguments -mavx512bw: YES
00:01:43.734 Compiler for C supports arguments -mavx512dq: YES
00:01:43.734 Compiler for C supports arguments -mavx512vl: YES
00:01:43.734 Compiler for C supports arguments -mvpclmulqdq: YES
00:01:43.734 Compiler for C supports arguments -mavx2: YES
00:01:43.734 Compiler for C supports arguments -mavx: YES
00:01:43.734 Message: lib/net: Defining dependency "net"
00:01:43.734 Message: lib/meter: Defining dependency "meter"
00:01:43.734 Message: lib/ethdev: Defining dependency "ethdev"
00:01:43.734 Message: lib/pci: Defining dependency "pci"
00:01:43.734 Message: lib/cmdline: Defining dependency "cmdline"
00:01:43.734 Message: lib/hash: Defining dependency "hash"
00:01:43.734 Message: lib/timer: Defining dependency "timer"
00:01:43.734 Message: lib/compressdev: Defining dependency "compressdev"
00:01:43.734 Message: lib/cryptodev: Defining dependency "cryptodev"
00:01:43.734 Message: lib/dmadev: Defining dependency "dmadev"
00:01:43.734 Compiler for C supports arguments -Wno-cast-qual: YES
00:01:43.734 Message: lib/power: Defining dependency "power"
00:01:43.734 Message: lib/reorder: Defining dependency "reorder"
00:01:43.734 Message: lib/security: Defining dependency "security"
00:01:43.734 Has header "linux/userfaultfd.h" : YES
00:01:43.734 Has header "linux/vduse.h" : YES
00:01:43.734 Message: lib/vhost: Defining dependency "vhost"
00:01:43.734 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:01:43.734 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:01:43.734 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:01:43.734 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:01:43.734 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:01:43.734 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:01:43.734 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:01:43.734 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:01:43.734 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:01:43.734 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:01:43.734 Program doxygen found: YES (/usr/bin/doxygen)
00:01:43.734 Configuring doxy-api-html.conf using configuration
00:01:43.734 Configuring doxy-api-man.conf using configuration
00:01:43.734
Program mandb found: YES (/usr/bin/mandb) 00:01:43.734 Program sphinx-build found: NO 00:01:43.734 Configuring rte_build_config.h using configuration 00:01:43.734 Message: 00:01:43.734 ================= 00:01:43.734 Applications Enabled 00:01:43.735 ================= 00:01:43.735 00:01:43.735 apps: 00:01:43.735 00:01:43.735 00:01:43.735 Message: 00:01:43.735 ================= 00:01:43.735 Libraries Enabled 00:01:43.735 ================= 00:01:43.735 00:01:43.735 libs: 00:01:43.735 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:43.735 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:43.735 cryptodev, dmadev, power, reorder, security, vhost, 00:01:43.735 00:01:43.735 Message: 00:01:43.735 =============== 00:01:43.735 Drivers Enabled 00:01:43.735 =============== 00:01:43.735 00:01:43.735 common: 00:01:43.735 00:01:43.735 bus: 00:01:43.735 pci, vdev, 00:01:43.735 mempool: 00:01:43.735 ring, 00:01:43.735 dma: 00:01:43.735 00:01:43.735 net: 00:01:43.735 00:01:43.735 crypto: 00:01:43.735 00:01:43.735 compress: 00:01:43.735 00:01:43.735 vdpa: 00:01:43.735 00:01:43.735 00:01:43.735 Message: 00:01:43.735 ================= 00:01:43.735 Content Skipped 00:01:43.735 ================= 00:01:43.735 00:01:43.735 apps: 00:01:43.735 dumpcap: explicitly disabled via build config 00:01:43.735 graph: explicitly disabled via build config 00:01:43.735 pdump: explicitly disabled via build config 00:01:43.735 proc-info: explicitly disabled via build config 00:01:43.735 test-acl: explicitly disabled via build config 00:01:43.735 test-bbdev: explicitly disabled via build config 00:01:43.735 test-cmdline: explicitly disabled via build config 00:01:43.735 test-compress-perf: explicitly disabled via build config 00:01:43.735 test-crypto-perf: explicitly disabled via build config 00:01:43.735 test-dma-perf: explicitly disabled via build config 00:01:43.735 test-eventdev: explicitly disabled via build config 00:01:43.735 test-fib: explicitly disabled via build 
config 00:01:43.735 test-flow-perf: explicitly disabled via build config 00:01:43.735 test-gpudev: explicitly disabled via build config 00:01:43.735 test-mldev: explicitly disabled via build config 00:01:43.735 test-pipeline: explicitly disabled via build config 00:01:43.735 test-pmd: explicitly disabled via build config 00:01:43.735 test-regex: explicitly disabled via build config 00:01:43.735 test-sad: explicitly disabled via build config 00:01:43.735 test-security-perf: explicitly disabled via build config 00:01:43.735 00:01:43.735 libs: 00:01:43.735 argparse: explicitly disabled via build config 00:01:43.735 metrics: explicitly disabled via build config 00:01:43.735 acl: explicitly disabled via build config 00:01:43.735 bbdev: explicitly disabled via build config 00:01:43.735 bitratestats: explicitly disabled via build config 00:01:43.735 bpf: explicitly disabled via build config 00:01:43.735 cfgfile: explicitly disabled via build config 00:01:43.735 distributor: explicitly disabled via build config 00:01:43.735 efd: explicitly disabled via build config 00:01:43.735 eventdev: explicitly disabled via build config 00:01:43.735 dispatcher: explicitly disabled via build config 00:01:43.735 gpudev: explicitly disabled via build config 00:01:43.735 gro: explicitly disabled via build config 00:01:43.735 gso: explicitly disabled via build config 00:01:43.735 ip_frag: explicitly disabled via build config 00:01:43.735 jobstats: explicitly disabled via build config 00:01:43.735 latencystats: explicitly disabled via build config 00:01:43.735 lpm: explicitly disabled via build config 00:01:43.735 member: explicitly disabled via build config 00:01:43.735 pcapng: explicitly disabled via build config 00:01:43.735 rawdev: explicitly disabled via build config 00:01:43.735 regexdev: explicitly disabled via build config 00:01:43.735 mldev: explicitly disabled via build config 00:01:43.735 rib: explicitly disabled via build config 00:01:43.735 sched: explicitly disabled via build 
config 00:01:43.735 stack: explicitly disabled via build config 00:01:43.735 ipsec: explicitly disabled via build config 00:01:43.735 pdcp: explicitly disabled via build config 00:01:43.735 fib: explicitly disabled via build config 00:01:43.735 port: explicitly disabled via build config 00:01:43.735 pdump: explicitly disabled via build config 00:01:43.735 table: explicitly disabled via build config 00:01:43.735 pipeline: explicitly disabled via build config 00:01:43.735 graph: explicitly disabled via build config 00:01:43.735 node: explicitly disabled via build config 00:01:43.735 00:01:43.735 drivers: 00:01:43.735 common/cpt: not in enabled drivers build config 00:01:43.735 common/dpaax: not in enabled drivers build config 00:01:43.735 common/iavf: not in enabled drivers build config 00:01:43.735 common/idpf: not in enabled drivers build config 00:01:43.735 common/ionic: not in enabled drivers build config 00:01:43.735 common/mvep: not in enabled drivers build config 00:01:43.735 common/octeontx: not in enabled drivers build config 00:01:43.735 bus/auxiliary: not in enabled drivers build config 00:01:43.735 bus/cdx: not in enabled drivers build config 00:01:43.735 bus/dpaa: not in enabled drivers build config 00:01:43.735 bus/fslmc: not in enabled drivers build config 00:01:43.735 bus/ifpga: not in enabled drivers build config 00:01:43.735 bus/platform: not in enabled drivers build config 00:01:43.735 bus/uacce: not in enabled drivers build config 00:01:43.735 bus/vmbus: not in enabled drivers build config 00:01:43.735 common/cnxk: not in enabled drivers build config 00:01:43.735 common/mlx5: not in enabled drivers build config 00:01:43.735 common/nfp: not in enabled drivers build config 00:01:43.735 common/nitrox: not in enabled drivers build config 00:01:43.735 common/qat: not in enabled drivers build config 00:01:43.735 common/sfc_efx: not in enabled drivers build config 00:01:43.735 mempool/bucket: not in enabled drivers build config 00:01:43.735 mempool/cnxk: 
not in enabled drivers build config 00:01:43.735 mempool/dpaa: not in enabled drivers build config 00:01:43.735 mempool/dpaa2: not in enabled drivers build config 00:01:43.735 mempool/octeontx: not in enabled drivers build config 00:01:43.735 mempool/stack: not in enabled drivers build config 00:01:43.735 dma/cnxk: not in enabled drivers build config 00:01:43.735 dma/dpaa: not in enabled drivers build config 00:01:43.735 dma/dpaa2: not in enabled drivers build config 00:01:43.735 dma/hisilicon: not in enabled drivers build config 00:01:43.735 dma/idxd: not in enabled drivers build config 00:01:43.735 dma/ioat: not in enabled drivers build config 00:01:43.735 dma/skeleton: not in enabled drivers build config 00:01:43.735 net/af_packet: not in enabled drivers build config 00:01:43.735 net/af_xdp: not in enabled drivers build config 00:01:43.735 net/ark: not in enabled drivers build config 00:01:43.735 net/atlantic: not in enabled drivers build config 00:01:43.735 net/avp: not in enabled drivers build config 00:01:43.735 net/axgbe: not in enabled drivers build config 00:01:43.735 net/bnx2x: not in enabled drivers build config 00:01:43.735 net/bnxt: not in enabled drivers build config 00:01:43.735 net/bonding: not in enabled drivers build config 00:01:43.735 net/cnxk: not in enabled drivers build config 00:01:43.735 net/cpfl: not in enabled drivers build config 00:01:43.735 net/cxgbe: not in enabled drivers build config 00:01:43.735 net/dpaa: not in enabled drivers build config 00:01:43.735 net/dpaa2: not in enabled drivers build config 00:01:43.735 net/e1000: not in enabled drivers build config 00:01:43.735 net/ena: not in enabled drivers build config 00:01:43.735 net/enetc: not in enabled drivers build config 00:01:43.735 net/enetfec: not in enabled drivers build config 00:01:43.735 net/enic: not in enabled drivers build config 00:01:43.735 net/failsafe: not in enabled drivers build config 00:01:43.735 net/fm10k: not in enabled drivers build config 00:01:43.735 
net/gve: not in enabled drivers build config 00:01:43.735 net/hinic: not in enabled drivers build config 00:01:43.735 net/hns3: not in enabled drivers build config 00:01:43.735 net/i40e: not in enabled drivers build config 00:01:43.735 net/iavf: not in enabled drivers build config 00:01:43.735 net/ice: not in enabled drivers build config 00:01:43.735 net/idpf: not in enabled drivers build config 00:01:43.735 net/igc: not in enabled drivers build config 00:01:43.735 net/ionic: not in enabled drivers build config 00:01:43.735 net/ipn3ke: not in enabled drivers build config 00:01:43.735 net/ixgbe: not in enabled drivers build config 00:01:43.735 net/mana: not in enabled drivers build config 00:01:43.735 net/memif: not in enabled drivers build config 00:01:43.735 net/mlx4: not in enabled drivers build config 00:01:43.735 net/mlx5: not in enabled drivers build config 00:01:43.735 net/mvneta: not in enabled drivers build config 00:01:43.735 net/mvpp2: not in enabled drivers build config 00:01:43.735 net/netvsc: not in enabled drivers build config 00:01:43.735 net/nfb: not in enabled drivers build config 00:01:43.735 net/nfp: not in enabled drivers build config 00:01:43.735 net/ngbe: not in enabled drivers build config 00:01:43.735 net/null: not in enabled drivers build config 00:01:43.735 net/octeontx: not in enabled drivers build config 00:01:43.735 net/octeon_ep: not in enabled drivers build config 00:01:43.735 net/pcap: not in enabled drivers build config 00:01:43.735 net/pfe: not in enabled drivers build config 00:01:43.735 net/qede: not in enabled drivers build config 00:01:43.735 net/ring: not in enabled drivers build config 00:01:43.735 net/sfc: not in enabled drivers build config 00:01:43.735 net/softnic: not in enabled drivers build config 00:01:43.735 net/tap: not in enabled drivers build config 00:01:43.735 net/thunderx: not in enabled drivers build config 00:01:43.735 net/txgbe: not in enabled drivers build config 00:01:43.735 net/vdev_netvsc: not in enabled 
drivers build config 00:01:43.735 net/vhost: not in enabled drivers build config 00:01:43.735 net/virtio: not in enabled drivers build config 00:01:43.735 net/vmxnet3: not in enabled drivers build config 00:01:43.735 raw/*: missing internal dependency, "rawdev" 00:01:43.735 crypto/armv8: not in enabled drivers build config 00:01:43.735 crypto/bcmfs: not in enabled drivers build config 00:01:43.735 crypto/caam_jr: not in enabled drivers build config 00:01:43.735 crypto/ccp: not in enabled drivers build config 00:01:43.736 crypto/cnxk: not in enabled drivers build config 00:01:43.736 crypto/dpaa_sec: not in enabled drivers build config 00:01:43.736 crypto/dpaa2_sec: not in enabled drivers build config 00:01:43.736 crypto/ipsec_mb: not in enabled drivers build config 00:01:43.736 crypto/mlx5: not in enabled drivers build config 00:01:43.736 crypto/mvsam: not in enabled drivers build config 00:01:43.736 crypto/nitrox: not in enabled drivers build config 00:01:43.736 crypto/null: not in enabled drivers build config 00:01:43.736 crypto/octeontx: not in enabled drivers build config 00:01:43.736 crypto/openssl: not in enabled drivers build config 00:01:43.736 crypto/scheduler: not in enabled drivers build config 00:01:43.736 crypto/uadk: not in enabled drivers build config 00:01:43.736 crypto/virtio: not in enabled drivers build config 00:01:43.736 compress/isal: not in enabled drivers build config 00:01:43.736 compress/mlx5: not in enabled drivers build config 00:01:43.736 compress/nitrox: not in enabled drivers build config 00:01:43.736 compress/octeontx: not in enabled drivers build config 00:01:43.736 compress/zlib: not in enabled drivers build config 00:01:43.736 regex/*: missing internal dependency, "regexdev" 00:01:43.736 ml/*: missing internal dependency, "mldev" 00:01:43.736 vdpa/ifc: not in enabled drivers build config 00:01:43.736 vdpa/mlx5: not in enabled drivers build config 00:01:43.736 vdpa/nfp: not in enabled drivers build config 00:01:43.736 vdpa/sfc: not 
in enabled drivers build config 00:01:43.736 event/*: missing internal dependency, "eventdev" 00:01:43.736 baseband/*: missing internal dependency, "bbdev" 00:01:43.736 gpu/*: missing internal dependency, "gpudev" 00:01:43.736 00:01:43.736 00:01:43.736 Build targets in project: 85 00:01:43.736 00:01:43.736 DPDK 24.03.0 00:01:43.736 00:01:43.736 User defined options 00:01:43.736 buildtype : debug 00:01:43.736 default_library : shared 00:01:43.736 libdir : lib 00:01:43.736 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:43.736 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:43.736 c_link_args : 00:01:43.736 cpu_instruction_set: native 00:01:43.736 disable_apps : test-fib,test-sad,test,test-regex,test-security-perf,test-bbdev,dumpcap,test-crypto-perf,test-flow-perf,test-gpudev,test-cmdline,test-dma-perf,test-eventdev,test-pipeline,test-acl,proc-info,test-compress-perf,graph,test-pmd,test-mldev,pdump 00:01:43.736 disable_libs : bbdev,argparse,latencystats,member,gpudev,mldev,pipeline,lpm,efd,regexdev,sched,node,dispatcher,table,bpf,port,gro,fib,cfgfile,ip_frag,gso,rawdev,ipsec,pdcp,rib,acl,metrics,graph,pcapng,jobstats,eventdev,stack,bitratestats,distributor,pdump 00:01:43.736 enable_docs : false 00:01:43.736 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:43.736 enable_kmods : false 00:01:43.736 max_lcores : 128 00:01:43.736 tests : false 00:01:43.736 00:01:43.736 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:43.736 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:43.736 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:43.736 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:43.736 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:43.736 [4/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 
00:01:43.736 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:01:43.736 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o
00:01:43.736 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o
00:01:43.736 [8/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:01:43.736 [9/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o
00:01:43.736 [10/268] Linking static target lib/librte_kvargs.a
00:01:43.736 [11/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:01:43.736 [12/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:01:43.736 [13/268] Compiling C object lib/librte_log.a.p/log_log.c.o
00:01:43.736 [14/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o
00:01:43.736 [15/268] Linking static target lib/librte_log.a
00:01:43.736 [16/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o
00:01:44.313 [17/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.571 [18/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o
00:01:44.571 [19/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o
00:01:44.571 [20/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:01:44.571 [21/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o
00:01:44.571 [22/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:01:44.571 [23/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o
00:01:44.571 [24/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:01:44.571 [25/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:01:44.571 [26/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:01:44.571 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o
00:01:44.571 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:01:44.571 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o
00:01:44.571 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:01:44.571 [31/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:01:44.571 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o
00:01:44.571 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o
00:01:44.571 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:01:44.571 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:01:44.571 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o
00:01:44.571 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o
00:01:44.571 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o
00:01:44.571 [39/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o
00:01:44.571 [40/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o
00:01:44.571 [41/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:01:44.571 [42/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o
00:01:44.571 [43/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o
00:01:44.571 [44/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o
00:01:44.571 [45/268] Linking static target lib/librte_telemetry.a
00:01:44.571 [46/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o
00:01:44.571 [47/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o
00:01:44.571 [48/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o
00:01:44.571 [49/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o
00:01:44.571 [50/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:01:44.571 [51/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o
00:01:44.571 [52/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o
00:01:44.832 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o
00:01:44.832 [54/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:01:44.832 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o
00:01:44.832 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:01:44.832 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:01:44.832 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o
00:01:44.832 [59/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o
00:01:44.832 [60/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o
00:01:44.832 [61/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o
00:01:44.832 [62/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o
00:01:44.832 [63/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o
00:01:44.832 [64/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o
00:01:44.832 [65/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output)
00:01:44.832 [66/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o
00:01:45.093 [67/268] Linking target lib/librte_log.so.24.1
00:01:45.093 [68/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o
00:01:45.093 [69/268] Linking static target lib/librte_pci.a
00:01:45.093 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o
00:01:45.361 [71/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o
00:01:45.361 [72/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o
00:01:45.361 [73/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o
00:01:45.361 [74/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o
00:01:45.361 [75/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o
00:01:45.361 [76/268] Linking static target lib/net/libnet_crc_avx512_lib.a
00:01:45.361 [77/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o
00:01:45.361 [78/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o
00:01:45.621 [79/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols
00:01:45.621 [80/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o
00:01:45.621 [81/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o
00:01:45.621 [82/268] Linking target lib/librte_kvargs.so.24.1
00:01:45.621 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o
00:01:45.621 [84/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o
00:01:45.621 [85/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o
00:01:45.621 [86/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o
00:01:45.621 [87/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o
00:01:45.622 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o
00:01:45.622 [89/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o
00:01:45.622 [90/268] Linking static target lib/librte_ring.a
00:01:45.622 [91/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o
00:01:45.622 [92/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o
00:01:45.622 [93/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o
00:01:45.622 [94/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o
00:01:45.622 [95/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o
00:01:45.622 [96/268] Linking static target lib/librte_meter.a
00:01:45.622 [97/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o
00:01:45.622 [98/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o
00:01:45.622 [99/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o
00:01:45.622 [100/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o
00:01:45.622 [101/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o
00:01:45.622 [102/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o
00:01:45.622 [103/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o
00:01:45.622 [104/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o
00:01:45.622 [105/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o
00:01:45.622 [106/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o
00:01:45.622 [107/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.622 [108/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o
00:01:45.622 [109/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o
00:01:45.622 [110/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:45.883 [111/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o
00:01:45.883 [112/268] Linking target lib/librte_telemetry.so.24.1
00:01:45.883 [113/268] Linking static target lib/librte_eal.a
00:01:45.883 [114/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o
00:01:45.883 [115/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o
00:01:45.883 [116/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o
00:01:45.883 [117/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o
00:01:45.883 [118/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols
00:01:45.883 [119/268] Linking static target lib/librte_rcu.a
00:01:45.883 [120/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o
00:01:45.883 [121/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o
00:01:45.883 [122/268] Linking static target lib/librte_mempool.a
00:01:45.883 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o
00:01:45.883 [124/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o
00:01:45.883 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o
00:01:45.883 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o
00:01:46.148 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o
00:01:46.148 [128/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols
00:01:46.148 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o
00:01:46.148 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o
00:01:46.148 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o
00:01:46.148 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.148 [133/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o
00:01:46.148 [134/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o
00:01:46.148 [135/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o
00:01:46.148 [136/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o
00:01:46.148 [137/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o
00:01:46.407 [138/268] Linking static target lib/librte_net.a
00:01:46.407 [139/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o
00:01:46.407 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o
00:01:46.407 [141/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.407 [142/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o
00:01:46.407 [143/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o
00:01:46.407 [144/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o
00:01:46.407 [145/268] Linking static target lib/librte_cmdline.a
00:01:46.407 [146/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o
00:01:46.407 [147/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o
00:01:46.407 [148/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o
00:01:46.667 [149/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.667 [150/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o
00:01:46.667 [151/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o
00:01:46.667 [152/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o
00:01:46.667 [153/268] Linking static target lib/librte_timer.a
00:01:46.667 [154/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o
00:01:46.667 [155/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o
00:01:46.667 [156/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o
00:01:46.667 [157/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o
00:01:46.667 [158/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o
00:01:46.667 [159/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.667 [160/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o
00:01:46.667 [161/268] Linking static target lib/librte_dmadev.a
00:01:46.927 [162/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o
00:01:46.927 [163/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o
00:01:46.927 [164/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o
00:01:46.927 [165/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o
00:01:46.927 [166/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o
00:01:46.927 [167/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o
00:01:46.927 [168/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output)
00:01:46.927 [169/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o
00:01:46.927 [170/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o
00:01:47.186 [171/268] Linking static target lib/librte_power.a
00:01:47.186 [172/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.186 [173/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o
00:01:47.186 [174/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o
00:01:47.186 [175/268] Linking static target lib/librte_hash.a
00:01:47.186 [176/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o
00:01:47.186 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o
00:01:47.186 [178/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o
00:01:47.186 [179/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o
00:01:47.186 [180/268] Linking static target lib/librte_compressdev.a
00:01:47.186 [181/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o
00:01:47.186 [182/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o
00:01:47.186 [183/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o
00:01:47.186 [184/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o
00:01:47.186 [185/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o
00:01:47.186 [186/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o
00:01:47.186 [187/268] Linking static target lib/librte_reorder.a
00:01:47.186 [188/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.444 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o
00:01:47.444 [190/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o
00:01:47.444 [191/268] Linking static target drivers/libtmp_rte_bus_pci.a
00:01:47.444 [192/268] Linking static target drivers/libtmp_rte_bus_vdev.a
00:01:47.444 [193/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.444 [194/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o
00:01:47.444 [195/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o
00:01:47.444 [196/268] Linking static target lib/librte_mbuf.a
00:01:47.444 [197/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o
00:01:47.444 [198/268] Linking static target drivers/libtmp_rte_mempool_ring.a
00:01:47.444 [199/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command
00:01:47.444 [200/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o
00:01:47.444 [201/268] Linking static target lib/librte_security.a
00:01:47.444 [202/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:47.444 [203/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o
00:01:47.444 [204/268] Generating drivers/rte_bus_pci.pmd.c with a custom command
00:01:47.444 [205/268] Linking static target drivers/librte_bus_vdev.a
00:01:47.444 [206/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.444 [207/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o
00:01:47.444 [208/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:47.444 [209/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o
00:01:47.444 [210/268] Linking static target drivers/librte_bus_pci.a
00:01:47.702 [211/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.702 [212/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o
00:01:47.702 [213/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.702 [214/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command
00:01:47.702 [215/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.702 [216/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:47.702 [217/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o
00:01:47.702 [218/268] Linking static target drivers/librte_mempool_ring.a
00:01:47.702 [219/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o
00:01:47.702 [220/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.702 [221/268] Linking static target lib/librte_ethdev.a
00:01:47.960 [222/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.960 [223/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output)
00:01:47.960 [224/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output)
00:01:48.219 [225/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o
00:01:48.219 [226/268] Linking static target lib/librte_cryptodev.a
00:01:49.153 [227/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output)
00:01:50.527
[228/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:51.900 [229/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.158 [230/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:01:52.158 [231/268] Linking target lib/librte_eal.so.24.1 00:01:52.417 [232/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:01:52.417 [233/268] Linking target lib/librte_ring.so.24.1 00:01:52.417 [234/268] Linking target lib/librte_timer.so.24.1 00:01:52.417 [235/268] Linking target lib/librte_pci.so.24.1 00:01:52.417 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:01:52.417 [237/268] Linking target lib/librte_meter.so.24.1 00:01:52.417 [238/268] Linking target lib/librte_dmadev.so.24.1 00:01:52.417 [239/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:01:52.417 [240/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:01:52.417 [241/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:01:52.417 [242/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:01:52.417 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:01:52.417 [244/268] Linking target lib/librte_rcu.so.24.1 00:01:52.417 [245/268] Linking target lib/librte_mempool.so.24.1 00:01:52.417 [246/268] Linking target drivers/librte_bus_pci.so.24.1 00:01:52.675 [247/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:01:52.675 [248/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:01:52.675 [249/268] Linking target drivers/librte_mempool_ring.so.24.1 00:01:52.675 [250/268] Linking target lib/librte_mbuf.so.24.1 00:01:52.675 [251/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:01:52.933 [252/268] 
Linking target lib/librte_net.so.24.1 00:01:52.933 [253/268] Linking target lib/librte_reorder.so.24.1 00:01:52.933 [254/268] Linking target lib/librte_compressdev.so.24.1 00:01:52.933 [255/268] Linking target lib/librte_cryptodev.so.24.1 00:01:52.933 [256/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:01:52.933 [257/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:01:52.933 [258/268] Linking target lib/librte_hash.so.24.1 00:01:52.933 [259/268] Linking target lib/librte_cmdline.so.24.1 00:01:52.933 [260/268] Linking target lib/librte_security.so.24.1 00:01:52.933 [261/268] Linking target lib/librte_ethdev.so.24.1 00:01:53.191 [262/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:01:53.191 [263/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:01:53.191 [264/268] Linking target lib/librte_power.so.24.1 00:01:56.481 [265/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:56.481 [266/268] Linking static target lib/librte_vhost.a 00:01:57.048 [267/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.048 [268/268] Linking target lib/librte_vhost.so.24.1 00:01:57.048 INFO: autodetecting backend as ninja 00:01:57.048 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 48 00:01:57.982 CC lib/log/log.o 00:01:57.982 CC lib/log/log_flags.o 00:01:57.982 CC lib/log/log_deprecated.o 00:01:57.982 CC lib/ut_mock/mock.o 00:01:57.982 CC lib/ut/ut.o 00:01:58.240 LIB libspdk_log.a 00:01:58.240 LIB libspdk_ut_mock.a 00:01:58.240 LIB libspdk_ut.a 00:01:58.240 SO libspdk_ut.so.2.0 00:01:58.240 SO libspdk_ut_mock.so.6.0 00:01:58.240 SO libspdk_log.so.7.0 00:01:58.240 SYMLINK libspdk_ut_mock.so 00:01:58.240 SYMLINK libspdk_ut.so 00:01:58.240 SYMLINK libspdk_log.so 00:01:58.498 CC 
lib/ioat/ioat.o 00:01:58.498 CC lib/util/base64.o 00:01:58.498 CC lib/dma/dma.o 00:01:58.498 CXX lib/trace_parser/trace.o 00:01:58.498 CC lib/util/bit_array.o 00:01:58.498 CC lib/util/cpuset.o 00:01:58.498 CC lib/util/crc16.o 00:01:58.498 CC lib/util/crc32.o 00:01:58.498 CC lib/util/crc32c.o 00:01:58.498 CC lib/util/crc32_ieee.o 00:01:58.498 CC lib/util/crc64.o 00:01:58.498 CC lib/util/dif.o 00:01:58.498 CC lib/util/fd.o 00:01:58.498 CC lib/util/fd_group.o 00:01:58.498 CC lib/util/file.o 00:01:58.498 CC lib/util/hexlify.o 00:01:58.498 CC lib/util/iov.o 00:01:58.498 CC lib/util/math.o 00:01:58.498 CC lib/util/net.o 00:01:58.498 CC lib/util/pipe.o 00:01:58.499 CC lib/util/strerror_tls.o 00:01:58.499 CC lib/util/string.o 00:01:58.499 CC lib/util/uuid.o 00:01:58.499 CC lib/util/xor.o 00:01:58.499 CC lib/util/zipf.o 00:01:58.499 CC lib/vfio_user/host/vfio_user_pci.o 00:01:58.499 CC lib/vfio_user/host/vfio_user.o 00:01:58.756 LIB libspdk_dma.a 00:01:58.756 SO libspdk_dma.so.4.0 00:01:58.756 SYMLINK libspdk_dma.so 00:01:58.756 LIB libspdk_ioat.a 00:01:58.756 SO libspdk_ioat.so.7.0 00:01:58.756 SYMLINK libspdk_ioat.so 00:01:58.756 LIB libspdk_vfio_user.a 00:01:58.756 SO libspdk_vfio_user.so.5.0 00:01:59.040 SYMLINK libspdk_vfio_user.so 00:01:59.040 LIB libspdk_util.a 00:01:59.040 SO libspdk_util.so.10.0 00:01:59.297 SYMLINK libspdk_util.so 00:01:59.297 CC lib/rdma_provider/common.o 00:01:59.297 CC lib/rdma_utils/rdma_utils.o 00:01:59.297 CC lib/idxd/idxd.o 00:01:59.297 CC lib/json/json_parse.o 00:01:59.297 CC lib/vmd/vmd.o 00:01:59.297 CC lib/conf/conf.o 00:01:59.297 CC lib/rdma_provider/rdma_provider_verbs.o 00:01:59.297 CC lib/idxd/idxd_user.o 00:01:59.297 CC lib/vmd/led.o 00:01:59.297 CC lib/json/json_util.o 00:01:59.297 CC lib/idxd/idxd_kernel.o 00:01:59.297 CC lib/json/json_write.o 00:01:59.297 CC lib/env_dpdk/env.o 00:01:59.297 CC lib/env_dpdk/memory.o 00:01:59.297 CC lib/env_dpdk/pci.o 00:01:59.297 CC lib/env_dpdk/init.o 00:01:59.297 CC lib/env_dpdk/threads.o 
00:01:59.297 CC lib/env_dpdk/pci_ioat.o 00:01:59.297 CC lib/env_dpdk/pci_virtio.o 00:01:59.297 CC lib/env_dpdk/pci_vmd.o 00:01:59.297 CC lib/env_dpdk/pci_idxd.o 00:01:59.297 CC lib/env_dpdk/pci_event.o 00:01:59.297 CC lib/env_dpdk/sigbus_handler.o 00:01:59.297 CC lib/env_dpdk/pci_dpdk_2207.o 00:01:59.297 CC lib/env_dpdk/pci_dpdk.o 00:01:59.297 CC lib/env_dpdk/pci_dpdk_2211.o 00:01:59.297 LIB libspdk_trace_parser.a 00:01:59.553 SO libspdk_trace_parser.so.5.0 00:01:59.553 SYMLINK libspdk_trace_parser.so 00:01:59.553 LIB libspdk_rdma_provider.a 00:01:59.553 SO libspdk_rdma_provider.so.6.0 00:01:59.811 LIB libspdk_rdma_utils.a 00:01:59.811 SYMLINK libspdk_rdma_provider.so 00:01:59.811 LIB libspdk_json.a 00:01:59.811 SO libspdk_rdma_utils.so.1.0 00:01:59.811 LIB libspdk_conf.a 00:01:59.811 SO libspdk_conf.so.6.0 00:01:59.811 SO libspdk_json.so.6.0 00:01:59.811 SYMLINK libspdk_rdma_utils.so 00:01:59.811 SYMLINK libspdk_conf.so 00:01:59.811 SYMLINK libspdk_json.so 00:02:00.069 CC lib/jsonrpc/jsonrpc_server.o 00:02:00.069 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:00.069 CC lib/jsonrpc/jsonrpc_client.o 00:02:00.069 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:00.069 LIB libspdk_idxd.a 00:02:00.069 SO libspdk_idxd.so.12.0 00:02:00.069 SYMLINK libspdk_idxd.so 00:02:00.069 LIB libspdk_vmd.a 00:02:00.069 SO libspdk_vmd.so.6.0 00:02:00.069 SYMLINK libspdk_vmd.so 00:02:00.326 LIB libspdk_jsonrpc.a 00:02:00.326 SO libspdk_jsonrpc.so.6.0 00:02:00.326 SYMLINK libspdk_jsonrpc.so 00:02:00.583 CC lib/rpc/rpc.o 00:02:00.841 LIB libspdk_rpc.a 00:02:00.841 SO libspdk_rpc.so.6.0 00:02:00.841 SYMLINK libspdk_rpc.so 00:02:01.099 CC lib/keyring/keyring.o 00:02:01.099 CC lib/notify/notify.o 00:02:01.099 CC lib/keyring/keyring_rpc.o 00:02:01.099 CC lib/notify/notify_rpc.o 00:02:01.099 CC lib/trace/trace.o 00:02:01.099 CC lib/trace/trace_flags.o 00:02:01.099 CC lib/trace/trace_rpc.o 00:02:01.099 LIB libspdk_notify.a 00:02:01.099 SO libspdk_notify.so.6.0 00:02:01.356 SYMLINK libspdk_notify.so 
00:02:01.356 LIB libspdk_keyring.a 00:02:01.356 LIB libspdk_trace.a 00:02:01.356 SO libspdk_keyring.so.1.0 00:02:01.356 SO libspdk_trace.so.10.0 00:02:01.356 SYMLINK libspdk_keyring.so 00:02:01.356 SYMLINK libspdk_trace.so 00:02:01.356 LIB libspdk_env_dpdk.a 00:02:01.356 SO libspdk_env_dpdk.so.15.0 00:02:01.614 CC lib/sock/sock.o 00:02:01.614 CC lib/sock/sock_rpc.o 00:02:01.614 CC lib/thread/thread.o 00:02:01.614 CC lib/thread/iobuf.o 00:02:01.614 SYMLINK libspdk_env_dpdk.so 00:02:01.872 LIB libspdk_sock.a 00:02:01.872 SO libspdk_sock.so.10.0 00:02:01.872 SYMLINK libspdk_sock.so 00:02:02.129 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:02.129 CC lib/nvme/nvme_ctrlr.o 00:02:02.129 CC lib/nvme/nvme_fabric.o 00:02:02.129 CC lib/nvme/nvme_ns_cmd.o 00:02:02.129 CC lib/nvme/nvme_ns.o 00:02:02.129 CC lib/nvme/nvme_pcie_common.o 00:02:02.129 CC lib/nvme/nvme_pcie.o 00:02:02.129 CC lib/nvme/nvme_qpair.o 00:02:02.129 CC lib/nvme/nvme.o 00:02:02.129 CC lib/nvme/nvme_quirks.o 00:02:02.129 CC lib/nvme/nvme_transport.o 00:02:02.129 CC lib/nvme/nvme_discovery.o 00:02:02.129 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:02.129 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:02.129 CC lib/nvme/nvme_tcp.o 00:02:02.129 CC lib/nvme/nvme_opal.o 00:02:02.129 CC lib/nvme/nvme_io_msg.o 00:02:02.129 CC lib/nvme/nvme_poll_group.o 00:02:02.129 CC lib/nvme/nvme_zns.o 00:02:02.129 CC lib/nvme/nvme_stubs.o 00:02:02.129 CC lib/nvme/nvme_auth.o 00:02:02.129 CC lib/nvme/nvme_cuse.o 00:02:02.129 CC lib/nvme/nvme_vfio_user.o 00:02:02.129 CC lib/nvme/nvme_rdma.o 00:02:03.062 LIB libspdk_thread.a 00:02:03.062 SO libspdk_thread.so.10.1 00:02:03.062 SYMLINK libspdk_thread.so 00:02:03.319 CC lib/vfu_tgt/tgt_endpoint.o 00:02:03.319 CC lib/init/json_config.o 00:02:03.319 CC lib/blob/blobstore.o 00:02:03.319 CC lib/virtio/virtio.o 00:02:03.319 CC lib/init/subsystem.o 00:02:03.319 CC lib/accel/accel.o 00:02:03.319 CC lib/vfu_tgt/tgt_rpc.o 00:02:03.319 CC lib/blob/request.o 00:02:03.319 CC lib/virtio/virtio_vhost_user.o 00:02:03.319 
CC lib/init/subsystem_rpc.o 00:02:03.319 CC lib/blob/zeroes.o 00:02:03.319 CC lib/accel/accel_rpc.o 00:02:03.319 CC lib/blob/blob_bs_dev.o 00:02:03.319 CC lib/virtio/virtio_vfio_user.o 00:02:03.319 CC lib/init/rpc.o 00:02:03.319 CC lib/accel/accel_sw.o 00:02:03.319 CC lib/virtio/virtio_pci.o 00:02:03.577 LIB libspdk_init.a 00:02:03.577 SO libspdk_init.so.5.0 00:02:03.577 LIB libspdk_virtio.a 00:02:03.577 LIB libspdk_vfu_tgt.a 00:02:03.577 SYMLINK libspdk_init.so 00:02:03.577 SO libspdk_vfu_tgt.so.3.0 00:02:03.577 SO libspdk_virtio.so.7.0 00:02:03.834 SYMLINK libspdk_vfu_tgt.so 00:02:03.834 SYMLINK libspdk_virtio.so 00:02:03.834 CC lib/event/app.o 00:02:03.834 CC lib/event/reactor.o 00:02:03.834 CC lib/event/log_rpc.o 00:02:03.834 CC lib/event/app_rpc.o 00:02:03.834 CC lib/event/scheduler_static.o 00:02:04.400 LIB libspdk_event.a 00:02:04.400 SO libspdk_event.so.14.0 00:02:04.400 SYMLINK libspdk_event.so 00:02:04.400 LIB libspdk_accel.a 00:02:04.400 SO libspdk_accel.so.16.0 00:02:04.400 SYMLINK libspdk_accel.so 00:02:04.658 LIB libspdk_nvme.a 00:02:04.658 CC lib/bdev/bdev.o 00:02:04.658 CC lib/bdev/bdev_rpc.o 00:02:04.658 CC lib/bdev/bdev_zone.o 00:02:04.658 CC lib/bdev/part.o 00:02:04.658 CC lib/bdev/scsi_nvme.o 00:02:04.658 SO libspdk_nvme.so.13.1 00:02:04.917 SYMLINK libspdk_nvme.so 00:02:06.293 LIB libspdk_blob.a 00:02:06.293 SO libspdk_blob.so.11.0 00:02:06.293 SYMLINK libspdk_blob.so 00:02:06.552 CC lib/blobfs/blobfs.o 00:02:06.552 CC lib/blobfs/tree.o 00:02:06.552 CC lib/lvol/lvol.o 00:02:07.118 LIB libspdk_bdev.a 00:02:07.118 SO libspdk_bdev.so.16.0 00:02:07.382 SYMLINK libspdk_bdev.so 00:02:07.382 LIB libspdk_blobfs.a 00:02:07.382 SO libspdk_blobfs.so.10.0 00:02:07.382 CC lib/scsi/dev.o 00:02:07.382 CC lib/ublk/ublk.o 00:02:07.382 CC lib/nvmf/ctrlr.o 00:02:07.382 CC lib/scsi/lun.o 00:02:07.382 CC lib/ublk/ublk_rpc.o 00:02:07.382 CC lib/nbd/nbd.o 00:02:07.382 CC lib/nvmf/ctrlr_discovery.o 00:02:07.382 CC lib/ftl/ftl_core.o 00:02:07.382 CC lib/nbd/nbd_rpc.o 
00:02:07.382 CC lib/nvmf/ctrlr_bdev.o 00:02:07.382 CC lib/scsi/port.o 00:02:07.382 CC lib/ftl/ftl_init.o 00:02:07.382 CC lib/scsi/scsi.o 00:02:07.382 CC lib/nvmf/subsystem.o 00:02:07.382 CC lib/scsi/scsi_bdev.o 00:02:07.382 CC lib/ftl/ftl_layout.o 00:02:07.382 CC lib/nvmf/nvmf.o 00:02:07.382 CC lib/scsi/scsi_pr.o 00:02:07.382 CC lib/ftl/ftl_debug.o 00:02:07.382 CC lib/ftl/ftl_io.o 00:02:07.382 CC lib/nvmf/transport.o 00:02:07.382 CC lib/nvmf/nvmf_rpc.o 00:02:07.382 CC lib/scsi/scsi_rpc.o 00:02:07.382 CC lib/ftl/ftl_sb.o 00:02:07.382 CC lib/scsi/task.o 00:02:07.382 CC lib/nvmf/tcp.o 00:02:07.382 CC lib/nvmf/stubs.o 00:02:07.382 CC lib/ftl/ftl_l2p.o 00:02:07.382 CC lib/ftl/ftl_l2p_flat.o 00:02:07.382 CC lib/nvmf/vfio_user.o 00:02:07.382 CC lib/nvmf/mdns_server.o 00:02:07.382 CC lib/ftl/ftl_nv_cache.o 00:02:07.382 CC lib/ftl/ftl_band.o 00:02:07.382 CC lib/nvmf/rdma.o 00:02:07.382 CC lib/ftl/ftl_band_ops.o 00:02:07.382 CC lib/nvmf/auth.o 00:02:07.382 CC lib/ftl/ftl_writer.o 00:02:07.382 CC lib/ftl/ftl_rq.o 00:02:07.382 CC lib/ftl/ftl_reloc.o 00:02:07.382 CC lib/ftl/ftl_l2p_cache.o 00:02:07.382 SYMLINK libspdk_blobfs.so 00:02:07.382 CC lib/ftl/ftl_p2l.o 00:02:07.382 CC lib/ftl/mngt/ftl_mngt.o 00:02:07.382 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:07.382 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:07.382 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:07.382 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:07.382 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:07.640 LIB libspdk_lvol.a 00:02:07.640 SO libspdk_lvol.so.10.0 00:02:07.640 SYMLINK libspdk_lvol.so 00:02:07.640 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:07.907 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:07.907 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:07.907 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:07.907 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:07.907 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:07.907 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:07.907 CC lib/ftl/utils/ftl_conf.o 00:02:07.907 CC lib/ftl/utils/ftl_mempool.o 00:02:07.907 CC lib/ftl/utils/ftl_md.o 
00:02:07.907 CC lib/ftl/utils/ftl_bitmap.o 00:02:07.907 CC lib/ftl/utils/ftl_property.o 00:02:07.907 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:07.907 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:07.907 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:07.907 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:07.907 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:07.907 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:07.907 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:07.907 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:08.166 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:08.166 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:08.166 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:08.166 CC lib/ftl/base/ftl_base_dev.o 00:02:08.166 CC lib/ftl/base/ftl_base_bdev.o 00:02:08.166 CC lib/ftl/ftl_trace.o 00:02:08.166 LIB libspdk_nbd.a 00:02:08.166 SO libspdk_nbd.so.7.0 00:02:08.424 SYMLINK libspdk_nbd.so 00:02:08.424 LIB libspdk_scsi.a 00:02:08.424 SO libspdk_scsi.so.9.0 00:02:08.424 LIB libspdk_ublk.a 00:02:08.424 SO libspdk_ublk.so.3.0 00:02:08.683 SYMLINK libspdk_scsi.so 00:02:08.683 SYMLINK libspdk_ublk.so 00:02:08.683 CC lib/vhost/vhost.o 00:02:08.683 CC lib/iscsi/conn.o 00:02:08.683 CC lib/iscsi/init_grp.o 00:02:08.683 CC lib/vhost/vhost_rpc.o 00:02:08.683 CC lib/iscsi/iscsi.o 00:02:08.683 CC lib/vhost/vhost_scsi.o 00:02:08.683 CC lib/iscsi/md5.o 00:02:08.683 CC lib/vhost/vhost_blk.o 00:02:08.683 CC lib/iscsi/param.o 00:02:08.683 CC lib/vhost/rte_vhost_user.o 00:02:08.683 CC lib/iscsi/portal_grp.o 00:02:08.683 CC lib/iscsi/tgt_node.o 00:02:08.683 CC lib/iscsi/iscsi_subsystem.o 00:02:08.683 CC lib/iscsi/iscsi_rpc.o 00:02:08.683 CC lib/iscsi/task.o 00:02:08.941 LIB libspdk_ftl.a 00:02:09.200 SO libspdk_ftl.so.9.0 00:02:09.458 SYMLINK libspdk_ftl.so 00:02:10.024 LIB libspdk_vhost.a 00:02:10.024 SO libspdk_vhost.so.8.0 00:02:10.024 LIB libspdk_nvmf.a 00:02:10.024 SYMLINK libspdk_vhost.so 00:02:10.024 SO libspdk_nvmf.so.19.0 00:02:10.024 LIB libspdk_iscsi.a 00:02:10.283 SO libspdk_iscsi.so.8.0 00:02:10.283 SYMLINK 
libspdk_nvmf.so 00:02:10.283 SYMLINK libspdk_iscsi.so 00:02:10.542 CC module/env_dpdk/env_dpdk_rpc.o 00:02:10.542 CC module/vfu_device/vfu_virtio.o 00:02:10.542 CC module/vfu_device/vfu_virtio_blk.o 00:02:10.542 CC module/vfu_device/vfu_virtio_scsi.o 00:02:10.542 CC module/vfu_device/vfu_virtio_rpc.o 00:02:10.801 CC module/accel/error/accel_error.o 00:02:10.801 CC module/accel/ioat/accel_ioat.o 00:02:10.801 CC module/sock/posix/posix.o 00:02:10.801 CC module/scheduler/gscheduler/gscheduler.o 00:02:10.801 CC module/accel/dsa/accel_dsa.o 00:02:10.801 CC module/accel/error/accel_error_rpc.o 00:02:10.801 CC module/accel/ioat/accel_ioat_rpc.o 00:02:10.801 CC module/accel/dsa/accel_dsa_rpc.o 00:02:10.801 CC module/keyring/linux/keyring.o 00:02:10.801 CC module/blob/bdev/blob_bdev.o 00:02:10.801 CC module/keyring/linux/keyring_rpc.o 00:02:10.801 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:10.801 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:10.801 CC module/accel/iaa/accel_iaa.o 00:02:10.801 CC module/keyring/file/keyring.o 00:02:10.801 CC module/accel/iaa/accel_iaa_rpc.o 00:02:10.801 CC module/keyring/file/keyring_rpc.o 00:02:10.801 LIB libspdk_env_dpdk_rpc.a 00:02:10.801 SO libspdk_env_dpdk_rpc.so.6.0 00:02:10.801 SYMLINK libspdk_env_dpdk_rpc.so 00:02:10.801 LIB libspdk_keyring_linux.a 00:02:10.801 LIB libspdk_keyring_file.a 00:02:10.801 LIB libspdk_scheduler_dpdk_governor.a 00:02:10.801 SO libspdk_keyring_linux.so.1.0 00:02:10.801 SO libspdk_keyring_file.so.1.0 00:02:10.801 LIB libspdk_scheduler_gscheduler.a 00:02:10.801 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:10.801 LIB libspdk_accel_error.a 00:02:10.801 LIB libspdk_scheduler_dynamic.a 00:02:10.801 LIB libspdk_accel_ioat.a 00:02:11.059 SO libspdk_scheduler_gscheduler.so.4.0 00:02:11.059 LIB libspdk_accel_iaa.a 00:02:11.059 SO libspdk_accel_error.so.2.0 00:02:11.059 SO libspdk_scheduler_dynamic.so.4.0 00:02:11.059 SO libspdk_accel_ioat.so.6.0 00:02:11.059 SYMLINK libspdk_keyring_linux.so 
00:02:11.059 SYMLINK libspdk_keyring_file.so 00:02:11.059 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:11.059 SO libspdk_accel_iaa.so.3.0 00:02:11.059 LIB libspdk_accel_dsa.a 00:02:11.059 SYMLINK libspdk_scheduler_gscheduler.so 00:02:11.059 SYMLINK libspdk_accel_error.so 00:02:11.059 SYMLINK libspdk_scheduler_dynamic.so 00:02:11.059 LIB libspdk_blob_bdev.a 00:02:11.059 SYMLINK libspdk_accel_ioat.so 00:02:11.059 SO libspdk_accel_dsa.so.5.0 00:02:11.059 SYMLINK libspdk_accel_iaa.so 00:02:11.059 SO libspdk_blob_bdev.so.11.0 00:02:11.059 SYMLINK libspdk_accel_dsa.so 00:02:11.059 SYMLINK libspdk_blob_bdev.so 00:02:11.320 LIB libspdk_vfu_device.a 00:02:11.320 SO libspdk_vfu_device.so.3.0 00:02:11.320 CC module/bdev/malloc/bdev_malloc.o 00:02:11.320 CC module/blobfs/bdev/blobfs_bdev.o 00:02:11.320 CC module/bdev/passthru/vbdev_passthru.o 00:02:11.320 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:11.320 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:11.320 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:11.320 CC module/bdev/delay/vbdev_delay.o 00:02:11.320 CC module/bdev/lvol/vbdev_lvol.o 00:02:11.320 CC module/bdev/split/vbdev_split.o 00:02:11.320 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:11.320 CC module/bdev/split/vbdev_split_rpc.o 00:02:11.320 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:11.320 CC module/bdev/gpt/gpt.o 00:02:11.320 CC module/bdev/null/bdev_null.o 00:02:11.320 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:11.320 CC module/bdev/gpt/vbdev_gpt.o 00:02:11.320 CC module/bdev/error/vbdev_error.o 00:02:11.320 CC module/bdev/nvme/bdev_nvme.o 00:02:11.320 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:11.320 CC module/bdev/aio/bdev_aio.o 00:02:11.320 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:11.320 CC module/bdev/null/bdev_null_rpc.o 00:02:11.320 CC module/bdev/aio/bdev_aio_rpc.o 00:02:11.320 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:11.320 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:11.320 CC module/bdev/iscsi/bdev_iscsi.o 00:02:11.320 CC 
module/bdev/error/vbdev_error_rpc.o 00:02:11.320 CC module/bdev/raid/bdev_raid.o 00:02:11.320 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:11.320 CC module/bdev/nvme/nvme_rpc.o 00:02:11.320 CC module/bdev/raid/bdev_raid_rpc.o 00:02:11.320 CC module/bdev/nvme/bdev_mdns_client.o 00:02:11.320 CC module/bdev/ftl/bdev_ftl.o 00:02:11.320 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:11.320 CC module/bdev/raid/bdev_raid_sb.o 00:02:11.320 CC module/bdev/raid/raid0.o 00:02:11.320 CC module/bdev/nvme/vbdev_opal.o 00:02:11.320 CC module/bdev/raid/raid1.o 00:02:11.320 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:11.320 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:11.320 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:11.320 CC module/bdev/raid/concat.o 00:02:11.320 SYMLINK libspdk_vfu_device.so 00:02:11.578 LIB libspdk_sock_posix.a 00:02:11.578 SO libspdk_sock_posix.so.6.0 00:02:11.578 SYMLINK libspdk_sock_posix.so 00:02:11.837 LIB libspdk_blobfs_bdev.a 00:02:11.837 LIB libspdk_bdev_gpt.a 00:02:11.837 SO libspdk_blobfs_bdev.so.6.0 00:02:11.837 SO libspdk_bdev_gpt.so.6.0 00:02:11.837 LIB libspdk_bdev_split.a 00:02:11.837 SYMLINK libspdk_blobfs_bdev.so 00:02:11.837 SO libspdk_bdev_split.so.6.0 00:02:11.837 SYMLINK libspdk_bdev_gpt.so 00:02:11.837 LIB libspdk_bdev_null.a 00:02:11.837 LIB libspdk_bdev_error.a 00:02:11.837 LIB libspdk_bdev_aio.a 00:02:11.837 SO libspdk_bdev_null.so.6.0 00:02:11.837 LIB libspdk_bdev_passthru.a 00:02:11.837 SO libspdk_bdev_error.so.6.0 00:02:11.837 LIB libspdk_bdev_ftl.a 00:02:11.837 SYMLINK libspdk_bdev_split.so 00:02:11.837 SO libspdk_bdev_aio.so.6.0 00:02:11.837 LIB libspdk_bdev_delay.a 00:02:11.837 SO libspdk_bdev_passthru.so.6.0 00:02:11.837 SO libspdk_bdev_ftl.so.6.0 00:02:11.837 LIB libspdk_bdev_malloc.a 00:02:11.837 SO libspdk_bdev_delay.so.6.0 00:02:11.837 LIB libspdk_bdev_zone_block.a 00:02:11.837 SYMLINK libspdk_bdev_null.so 00:02:11.837 LIB libspdk_bdev_iscsi.a 00:02:11.837 SYMLINK libspdk_bdev_error.so 00:02:11.837 SO 
libspdk_bdev_malloc.so.6.0 00:02:11.837 SO libspdk_bdev_zone_block.so.6.0 00:02:11.837 SYMLINK libspdk_bdev_aio.so 00:02:11.837 SO libspdk_bdev_iscsi.so.6.0 00:02:11.837 SYMLINK libspdk_bdev_passthru.so 00:02:12.096 SYMLINK libspdk_bdev_ftl.so 00:02:12.096 SYMLINK libspdk_bdev_delay.so 00:02:12.096 SYMLINK libspdk_bdev_malloc.so 00:02:12.096 SYMLINK libspdk_bdev_zone_block.so 00:02:12.096 SYMLINK libspdk_bdev_iscsi.so 00:02:12.096 LIB libspdk_bdev_lvol.a 00:02:12.096 LIB libspdk_bdev_virtio.a 00:02:12.096 SO libspdk_bdev_lvol.so.6.0 00:02:12.096 SO libspdk_bdev_virtio.so.6.0 00:02:12.096 SYMLINK libspdk_bdev_lvol.so 00:02:12.096 SYMLINK libspdk_bdev_virtio.so 00:02:12.662 LIB libspdk_bdev_raid.a 00:02:12.662 SO libspdk_bdev_raid.so.6.0 00:02:12.662 SYMLINK libspdk_bdev_raid.so 00:02:13.598 LIB libspdk_bdev_nvme.a 00:02:13.857 SO libspdk_bdev_nvme.so.7.0 00:02:13.857 SYMLINK libspdk_bdev_nvme.so 00:02:14.115 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:14.115 CC module/event/subsystems/sock/sock.o 00:02:14.115 CC module/event/subsystems/keyring/keyring.o 00:02:14.115 CC module/event/subsystems/vmd/vmd.o 00:02:14.115 CC module/event/subsystems/scheduler/scheduler.o 00:02:14.115 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:14.115 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:14.115 CC module/event/subsystems/iobuf/iobuf.o 00:02:14.115 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:14.373 LIB libspdk_event_keyring.a 00:02:14.373 LIB libspdk_event_vhost_blk.a 00:02:14.373 LIB libspdk_event_vfu_tgt.a 00:02:14.373 LIB libspdk_event_scheduler.a 00:02:14.373 LIB libspdk_event_vmd.a 00:02:14.373 LIB libspdk_event_sock.a 00:02:14.373 LIB libspdk_event_iobuf.a 00:02:14.373 SO libspdk_event_keyring.so.1.0 00:02:14.373 SO libspdk_event_vfu_tgt.so.3.0 00:02:14.373 SO libspdk_event_vhost_blk.so.3.0 00:02:14.373 SO libspdk_event_scheduler.so.4.0 00:02:14.373 SO libspdk_event_sock.so.5.0 00:02:14.373 SO libspdk_event_vmd.so.6.0 00:02:14.373 SO 
libspdk_event_iobuf.so.3.0 00:02:14.373 SYMLINK libspdk_event_keyring.so 00:02:14.373 SYMLINK libspdk_event_vhost_blk.so 00:02:14.373 SYMLINK libspdk_event_vfu_tgt.so 00:02:14.373 SYMLINK libspdk_event_scheduler.so 00:02:14.373 SYMLINK libspdk_event_sock.so 00:02:14.373 SYMLINK libspdk_event_vmd.so 00:02:14.373 SYMLINK libspdk_event_iobuf.so 00:02:14.658 CC module/event/subsystems/accel/accel.o 00:02:14.917 LIB libspdk_event_accel.a 00:02:14.917 SO libspdk_event_accel.so.6.0 00:02:14.917 SYMLINK libspdk_event_accel.so 00:02:15.175 CC module/event/subsystems/bdev/bdev.o 00:02:15.175 LIB libspdk_event_bdev.a 00:02:15.175 SO libspdk_event_bdev.so.6.0 00:02:15.433 SYMLINK libspdk_event_bdev.so 00:02:15.433 CC module/event/subsystems/nbd/nbd.o 00:02:15.433 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:15.433 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:15.433 CC module/event/subsystems/ublk/ublk.o 00:02:15.433 CC module/event/subsystems/scsi/scsi.o 00:02:15.691 LIB libspdk_event_nbd.a 00:02:15.691 LIB libspdk_event_ublk.a 00:02:15.691 LIB libspdk_event_scsi.a 00:02:15.691 SO libspdk_event_nbd.so.6.0 00:02:15.691 SO libspdk_event_ublk.so.3.0 00:02:15.691 SO libspdk_event_scsi.so.6.0 00:02:15.691 SYMLINK libspdk_event_nbd.so 00:02:15.691 SYMLINK libspdk_event_ublk.so 00:02:15.691 SYMLINK libspdk_event_scsi.so 00:02:15.691 LIB libspdk_event_nvmf.a 00:02:15.691 SO libspdk_event_nvmf.so.6.0 00:02:15.949 SYMLINK libspdk_event_nvmf.so 00:02:15.949 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:15.949 CC module/event/subsystems/iscsi/iscsi.o 00:02:15.949 LIB libspdk_event_vhost_scsi.a 00:02:15.949 SO libspdk_event_vhost_scsi.so.3.0 00:02:15.949 LIB libspdk_event_iscsi.a 00:02:16.208 SO libspdk_event_iscsi.so.6.0 00:02:16.208 SYMLINK libspdk_event_vhost_scsi.so 00:02:16.208 SYMLINK libspdk_event_iscsi.so 00:02:16.208 SO libspdk.so.6.0 00:02:16.208 SYMLINK libspdk.so 00:02:16.469 CC app/trace_record/trace_record.o 00:02:16.469 CXX app/trace/trace.o 00:02:16.469 
CC app/spdk_top/spdk_top.o 00:02:16.469 CC test/rpc_client/rpc_client_test.o 00:02:16.469 CC app/spdk_nvme_identify/identify.o 00:02:16.469 CC app/spdk_lspci/spdk_lspci.o 00:02:16.469 CC app/spdk_nvme_perf/perf.o 00:02:16.469 TEST_HEADER include/spdk/accel.h 00:02:16.469 CC app/spdk_nvme_discover/discovery_aer.o 00:02:16.469 TEST_HEADER include/spdk/accel_module.h 00:02:16.469 TEST_HEADER include/spdk/assert.h 00:02:16.469 TEST_HEADER include/spdk/barrier.h 00:02:16.469 TEST_HEADER include/spdk/base64.h 00:02:16.469 TEST_HEADER include/spdk/bdev.h 00:02:16.469 TEST_HEADER include/spdk/bdev_module.h 00:02:16.469 TEST_HEADER include/spdk/bdev_zone.h 00:02:16.469 TEST_HEADER include/spdk/bit_array.h 00:02:16.469 TEST_HEADER include/spdk/bit_pool.h 00:02:16.469 TEST_HEADER include/spdk/blob_bdev.h 00:02:16.469 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:16.469 TEST_HEADER include/spdk/blobfs.h 00:02:16.469 TEST_HEADER include/spdk/blob.h 00:02:16.469 TEST_HEADER include/spdk/conf.h 00:02:16.469 TEST_HEADER include/spdk/config.h 00:02:16.469 TEST_HEADER include/spdk/cpuset.h 00:02:16.469 TEST_HEADER include/spdk/crc16.h 00:02:16.469 TEST_HEADER include/spdk/crc32.h 00:02:16.469 TEST_HEADER include/spdk/crc64.h 00:02:16.469 TEST_HEADER include/spdk/dif.h 00:02:16.469 TEST_HEADER include/spdk/dma.h 00:02:16.469 TEST_HEADER include/spdk/endian.h 00:02:16.469 TEST_HEADER include/spdk/env_dpdk.h 00:02:16.469 TEST_HEADER include/spdk/env.h 00:02:16.469 TEST_HEADER include/spdk/event.h 00:02:16.469 TEST_HEADER include/spdk/fd_group.h 00:02:16.469 TEST_HEADER include/spdk/fd.h 00:02:16.469 TEST_HEADER include/spdk/file.h 00:02:16.469 TEST_HEADER include/spdk/ftl.h 00:02:16.469 TEST_HEADER include/spdk/hexlify.h 00:02:16.469 TEST_HEADER include/spdk/gpt_spec.h 00:02:16.469 TEST_HEADER include/spdk/histogram_data.h 00:02:16.469 TEST_HEADER include/spdk/idxd.h 00:02:16.469 TEST_HEADER include/spdk/idxd_spec.h 00:02:16.469 TEST_HEADER include/spdk/init.h 00:02:16.469 TEST_HEADER 
include/spdk/ioat.h 00:02:16.469 TEST_HEADER include/spdk/ioat_spec.h 00:02:16.469 TEST_HEADER include/spdk/iscsi_spec.h 00:02:16.469 TEST_HEADER include/spdk/json.h 00:02:16.469 TEST_HEADER include/spdk/jsonrpc.h 00:02:16.469 TEST_HEADER include/spdk/keyring.h 00:02:16.469 TEST_HEADER include/spdk/keyring_module.h 00:02:16.469 TEST_HEADER include/spdk/likely.h 00:02:16.469 TEST_HEADER include/spdk/log.h 00:02:16.469 TEST_HEADER include/spdk/memory.h 00:02:16.469 TEST_HEADER include/spdk/lvol.h 00:02:16.469 TEST_HEADER include/spdk/mmio.h 00:02:16.469 TEST_HEADER include/spdk/net.h 00:02:16.469 TEST_HEADER include/spdk/nbd.h 00:02:16.469 TEST_HEADER include/spdk/notify.h 00:02:16.469 TEST_HEADER include/spdk/nvme.h 00:02:16.469 TEST_HEADER include/spdk/nvme_intel.h 00:02:16.469 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:16.469 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:16.469 TEST_HEADER include/spdk/nvme_spec.h 00:02:16.469 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:16.469 TEST_HEADER include/spdk/nvme_zns.h 00:02:16.469 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:16.469 TEST_HEADER include/spdk/nvmf.h 00:02:16.469 TEST_HEADER include/spdk/nvmf_spec.h 00:02:16.469 TEST_HEADER include/spdk/nvmf_transport.h 00:02:16.469 TEST_HEADER include/spdk/opal_spec.h 00:02:16.469 TEST_HEADER include/spdk/opal.h 00:02:16.469 TEST_HEADER include/spdk/pci_ids.h 00:02:16.469 TEST_HEADER include/spdk/pipe.h 00:02:16.469 TEST_HEADER include/spdk/queue.h 00:02:16.469 TEST_HEADER include/spdk/reduce.h 00:02:16.469 TEST_HEADER include/spdk/rpc.h 00:02:16.469 TEST_HEADER include/spdk/scheduler.h 00:02:16.469 TEST_HEADER include/spdk/scsi.h 00:02:16.469 TEST_HEADER include/spdk/scsi_spec.h 00:02:16.469 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:16.469 TEST_HEADER include/spdk/stdinc.h 00:02:16.469 TEST_HEADER include/spdk/sock.h 00:02:16.469 TEST_HEADER include/spdk/string.h 00:02:16.469 TEST_HEADER include/spdk/thread.h 00:02:16.469 TEST_HEADER include/spdk/trace.h 
00:02:16.469 TEST_HEADER include/spdk/trace_parser.h 00:02:16.469 TEST_HEADER include/spdk/tree.h 00:02:16.469 TEST_HEADER include/spdk/ublk.h 00:02:16.469 TEST_HEADER include/spdk/util.h 00:02:16.469 TEST_HEADER include/spdk/uuid.h 00:02:16.469 TEST_HEADER include/spdk/version.h 00:02:16.469 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:16.469 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:16.469 TEST_HEADER include/spdk/vhost.h 00:02:16.469 TEST_HEADER include/spdk/vmd.h 00:02:16.469 TEST_HEADER include/spdk/xor.h 00:02:16.469 CC app/spdk_dd/spdk_dd.o 00:02:16.469 TEST_HEADER include/spdk/zipf.h 00:02:16.469 CXX test/cpp_headers/accel.o 00:02:16.469 CXX test/cpp_headers/accel_module.o 00:02:16.469 CXX test/cpp_headers/assert.o 00:02:16.469 CXX test/cpp_headers/barrier.o 00:02:16.469 CXX test/cpp_headers/base64.o 00:02:16.469 CXX test/cpp_headers/bdev.o 00:02:16.469 CXX test/cpp_headers/bdev_module.o 00:02:16.469 CXX test/cpp_headers/bdev_zone.o 00:02:16.469 CXX test/cpp_headers/bit_array.o 00:02:16.469 CXX test/cpp_headers/bit_pool.o 00:02:16.469 CXX test/cpp_headers/blob_bdev.o 00:02:16.469 CXX test/cpp_headers/blobfs_bdev.o 00:02:16.469 CXX test/cpp_headers/blobfs.o 00:02:16.469 CXX test/cpp_headers/blob.o 00:02:16.469 CXX test/cpp_headers/conf.o 00:02:16.469 CXX test/cpp_headers/config.o 00:02:16.469 CXX test/cpp_headers/cpuset.o 00:02:16.469 CXX test/cpp_headers/crc16.o 00:02:16.469 CC app/iscsi_tgt/iscsi_tgt.o 00:02:16.469 CC app/nvmf_tgt/nvmf_main.o 00:02:16.469 CC app/spdk_tgt/spdk_tgt.o 00:02:16.469 CXX test/cpp_headers/crc32.o 00:02:16.469 CC examples/ioat/verify/verify.o 00:02:16.469 CC test/thread/poller_perf/poller_perf.o 00:02:16.469 CC examples/ioat/perf/perf.o 00:02:16.469 CC test/app/histogram_perf/histogram_perf.o 00:02:16.469 CC test/app/stub/stub.o 00:02:16.469 CC test/env/vtophys/vtophys.o 00:02:16.469 CC app/fio/nvme/fio_plugin.o 00:02:16.469 CC test/env/memory/memory_ut.o 00:02:16.469 CC test/app/jsoncat/jsoncat.o 00:02:16.469 CC 
examples/util/zipf/zipf.o 00:02:16.733 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:16.733 CC test/env/pci/pci_ut.o 00:02:16.733 CC test/dma/test_dma/test_dma.o 00:02:16.733 CC test/app/bdev_svc/bdev_svc.o 00:02:16.733 CC app/fio/bdev/fio_plugin.o 00:02:16.733 LINK spdk_lspci 00:02:16.733 CC test/env/mem_callbacks/mem_callbacks.o 00:02:16.733 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:16.733 LINK rpc_client_test 00:02:16.996 LINK spdk_nvme_discover 00:02:16.996 LINK interrupt_tgt 00:02:16.996 LINK poller_perf 00:02:16.996 CXX test/cpp_headers/crc64.o 00:02:16.996 LINK jsoncat 00:02:16.997 LINK vtophys 00:02:16.997 CXX test/cpp_headers/dif.o 00:02:16.997 LINK zipf 00:02:16.997 CXX test/cpp_headers/dma.o 00:02:16.997 LINK histogram_perf 00:02:16.997 CXX test/cpp_headers/endian.o 00:02:16.997 CXX test/cpp_headers/env_dpdk.o 00:02:16.997 CXX test/cpp_headers/env.o 00:02:16.997 CXX test/cpp_headers/event.o 00:02:16.997 CXX test/cpp_headers/fd_group.o 00:02:16.997 LINK env_dpdk_post_init 00:02:16.997 LINK spdk_trace_record 00:02:16.997 CXX test/cpp_headers/fd.o 00:02:16.997 CXX test/cpp_headers/file.o 00:02:16.997 CXX test/cpp_headers/ftl.o 00:02:16.997 CXX test/cpp_headers/gpt_spec.o 00:02:16.997 LINK nvmf_tgt 00:02:16.997 LINK stub 00:02:16.997 LINK iscsi_tgt 00:02:16.997 CXX test/cpp_headers/hexlify.o 00:02:16.997 CXX test/cpp_headers/histogram_data.o 00:02:16.997 CXX test/cpp_headers/idxd.o 00:02:16.997 LINK spdk_tgt 00:02:16.997 CXX test/cpp_headers/idxd_spec.o 00:02:16.997 LINK verify 00:02:16.997 LINK bdev_svc 00:02:16.997 LINK ioat_perf 00:02:16.997 CXX test/cpp_headers/init.o 00:02:16.997 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:16.997 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:17.265 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:17.265 CXX test/cpp_headers/ioat.o 00:02:17.265 CXX test/cpp_headers/ioat_spec.o 00:02:17.265 CXX test/cpp_headers/iscsi_spec.o 00:02:17.265 LINK spdk_dd 00:02:17.265 CXX test/cpp_headers/json.o 
00:02:17.265 CXX test/cpp_headers/jsonrpc.o 00:02:17.265 CXX test/cpp_headers/keyring.o 00:02:17.265 CXX test/cpp_headers/keyring_module.o 00:02:17.265 LINK spdk_trace 00:02:17.265 CXX test/cpp_headers/likely.o 00:02:17.265 CXX test/cpp_headers/log.o 00:02:17.265 CXX test/cpp_headers/lvol.o 00:02:17.265 CXX test/cpp_headers/memory.o 00:02:17.265 CXX test/cpp_headers/mmio.o 00:02:17.265 CXX test/cpp_headers/nbd.o 00:02:17.265 CXX test/cpp_headers/net.o 00:02:17.265 CXX test/cpp_headers/notify.o 00:02:17.265 CXX test/cpp_headers/nvme.o 00:02:17.265 CXX test/cpp_headers/nvme_intel.o 00:02:17.265 CXX test/cpp_headers/nvme_ocssd.o 00:02:17.525 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:17.525 LINK pci_ut 00:02:17.525 LINK test_dma 00:02:17.525 CXX test/cpp_headers/nvme_spec.o 00:02:17.525 CXX test/cpp_headers/nvme_zns.o 00:02:17.525 CXX test/cpp_headers/nvmf_cmd.o 00:02:17.525 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:17.525 CXX test/cpp_headers/nvmf.o 00:02:17.525 CXX test/cpp_headers/nvmf_spec.o 00:02:17.525 CXX test/cpp_headers/nvmf_transport.o 00:02:17.525 CXX test/cpp_headers/opal.o 00:02:17.525 CC test/event/event_perf/event_perf.o 00:02:17.525 CC test/event/reactor/reactor.o 00:02:17.525 CC test/event/reactor_perf/reactor_perf.o 00:02:17.525 CXX test/cpp_headers/opal_spec.o 00:02:17.525 CXX test/cpp_headers/pci_ids.o 00:02:17.525 CC test/event/app_repeat/app_repeat.o 00:02:17.525 CXX test/cpp_headers/pipe.o 00:02:17.525 CXX test/cpp_headers/queue.o 00:02:17.525 LINK nvme_fuzz 00:02:17.525 CXX test/cpp_headers/reduce.o 00:02:17.525 CC examples/idxd/perf/perf.o 00:02:17.794 LINK spdk_nvme 00:02:17.794 CXX test/cpp_headers/rpc.o 00:02:17.794 CC examples/vmd/lsvmd/lsvmd.o 00:02:17.794 CC examples/sock/hello_world/hello_sock.o 00:02:17.794 CC examples/vmd/led/led.o 00:02:17.794 CC test/event/scheduler/scheduler.o 00:02:17.794 CXX test/cpp_headers/scheduler.o 00:02:17.794 CXX test/cpp_headers/scsi.o 00:02:17.794 LINK spdk_bdev 00:02:17.794 CC 
examples/thread/thread/thread_ex.o 00:02:17.794 CXX test/cpp_headers/scsi_spec.o 00:02:17.794 CXX test/cpp_headers/sock.o 00:02:17.794 CXX test/cpp_headers/stdinc.o 00:02:17.794 CXX test/cpp_headers/string.o 00:02:17.794 CXX test/cpp_headers/thread.o 00:02:17.794 CXX test/cpp_headers/trace.o 00:02:17.794 CXX test/cpp_headers/trace_parser.o 00:02:17.794 CXX test/cpp_headers/tree.o 00:02:17.794 CXX test/cpp_headers/ublk.o 00:02:17.794 CXX test/cpp_headers/util.o 00:02:17.794 CXX test/cpp_headers/uuid.o 00:02:17.794 CC app/vhost/vhost.o 00:02:17.794 LINK event_perf 00:02:17.794 LINK reactor 00:02:17.794 LINK reactor_perf 00:02:17.794 CXX test/cpp_headers/version.o 00:02:17.794 CXX test/cpp_headers/vfio_user_pci.o 00:02:17.794 CXX test/cpp_headers/vfio_user_spec.o 00:02:18.061 CXX test/cpp_headers/vhost.o 00:02:18.061 CXX test/cpp_headers/vmd.o 00:02:18.061 LINK lsvmd 00:02:18.061 CXX test/cpp_headers/xor.o 00:02:18.061 CXX test/cpp_headers/zipf.o 00:02:18.061 LINK app_repeat 00:02:18.061 LINK vhost_fuzz 00:02:18.061 LINK spdk_nvme_perf 00:02:18.061 LINK led 00:02:18.061 LINK mem_callbacks 00:02:18.061 LINK spdk_nvme_identify 00:02:18.061 LINK spdk_top 00:02:18.061 LINK scheduler 00:02:18.061 LINK hello_sock 00:02:18.061 CC test/nvme/sgl/sgl.o 00:02:18.061 CC test/nvme/err_injection/err_injection.o 00:02:18.061 CC test/nvme/startup/startup.o 00:02:18.061 CC test/nvme/e2edp/nvme_dp.o 00:02:18.061 CC test/nvme/overhead/overhead.o 00:02:18.061 CC test/nvme/aer/aer.o 00:02:18.061 CC test/nvme/reset/reset.o 00:02:18.320 CC test/nvme/reserve/reserve.o 00:02:18.320 CC test/nvme/simple_copy/simple_copy.o 00:02:18.320 CC test/accel/dif/dif.o 00:02:18.320 LINK thread 00:02:18.320 CC test/blobfs/mkfs/mkfs.o 00:02:18.320 CC test/nvme/connect_stress/connect_stress.o 00:02:18.320 LINK vhost 00:02:18.320 CC test/nvme/boot_partition/boot_partition.o 00:02:18.320 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:18.320 CC test/nvme/fused_ordering/fused_ordering.o 00:02:18.320 CC 
test/nvme/compliance/nvme_compliance.o 00:02:18.320 CC test/nvme/fdp/fdp.o 00:02:18.320 CC test/nvme/cuse/cuse.o 00:02:18.320 LINK idxd_perf 00:02:18.320 CC test/lvol/esnap/esnap.o 00:02:18.320 LINK startup 00:02:18.320 LINK err_injection 00:02:18.579 LINK boot_partition 00:02:18.579 LINK doorbell_aers 00:02:18.579 LINK connect_stress 00:02:18.579 LINK fused_ordering 00:02:18.579 LINK overhead 00:02:18.579 LINK simple_copy 00:02:18.579 LINK mkfs 00:02:18.579 LINK reserve 00:02:18.579 LINK reset 00:02:18.579 CC examples/nvme/arbitration/arbitration.o 00:02:18.579 CC examples/nvme/hotplug/hotplug.o 00:02:18.579 CC examples/nvme/abort/abort.o 00:02:18.579 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:18.579 CC examples/nvme/reconnect/reconnect.o 00:02:18.579 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:18.579 CC examples/nvme/hello_world/hello_world.o 00:02:18.579 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:18.579 LINK aer 00:02:18.579 LINK nvme_dp 00:02:18.579 LINK fdp 00:02:18.579 LINK nvme_compliance 00:02:18.579 LINK sgl 00:02:18.838 LINK memory_ut 00:02:18.838 LINK dif 00:02:18.838 CC examples/accel/perf/accel_perf.o 00:02:18.838 LINK pmr_persistence 00:02:18.838 LINK cmb_copy 00:02:18.838 CC examples/blob/hello_world/hello_blob.o 00:02:18.838 CC examples/blob/cli/blobcli.o 00:02:18.838 LINK hello_world 00:02:19.097 LINK hotplug 00:02:19.097 LINK arbitration 00:02:19.097 LINK reconnect 00:02:19.097 LINK hello_blob 00:02:19.097 LINK abort 00:02:19.097 CC test/bdev/bdevio/bdevio.o 00:02:19.097 LINK nvme_manage 00:02:19.355 LINK accel_perf 00:02:19.355 LINK blobcli 00:02:19.355 LINK iscsi_fuzz 00:02:19.612 LINK bdevio 00:02:19.612 CC examples/bdev/hello_world/hello_bdev.o 00:02:19.612 CC examples/bdev/bdevperf/bdevperf.o 00:02:19.870 LINK cuse 00:02:19.870 LINK hello_bdev 00:02:20.438 LINK bdevperf 00:02:20.696 CC examples/nvmf/nvmf/nvmf.o 00:02:21.264 LINK nvmf 00:02:23.800 LINK esnap 00:02:23.800 00:02:23.800 real 0m49.480s 00:02:23.800 user 
10m6.910s 00:02:23.800 sys 2m26.814s 00:02:23.800 12:03:16 make -- common/autotest_common.sh@1126 -- $ xtrace_disable 00:02:23.800 12:03:16 make -- common/autotest_common.sh@10 -- $ set +x 00:02:23.800 ************************************ 00:02:23.800 END TEST make 00:02:23.800 ************************************ 00:02:23.800 12:03:16 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:23.800 12:03:16 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:23.800 12:03:16 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:23.800 12:03:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.800 12:03:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:23.800 12:03:16 -- pm/common@44 -- $ pid=2665495 00:02:23.800 12:03:16 -- pm/common@50 -- $ kill -TERM 2665495 00:02:23.800 12:03:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.800 12:03:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:23.800 12:03:16 -- pm/common@44 -- $ pid=2665497 00:02:23.800 12:03:16 -- pm/common@50 -- $ kill -TERM 2665497 00:02:23.800 12:03:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.800 12:03:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:23.800 12:03:16 -- pm/common@44 -- $ pid=2665499 00:02:23.800 12:03:16 -- pm/common@50 -- $ kill -TERM 2665499 00:02:23.800 12:03:16 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.800 12:03:16 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:23.800 12:03:16 -- pm/common@44 -- $ pid=2665527 00:02:23.800 12:03:16 -- pm/common@50 -- $ sudo -E kill -TERM 2665527 00:02:23.800 12:03:17 -- spdk/autotest.sh@25 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:23.800 12:03:17 -- nvmf/common.sh@7 -- # uname -s 00:02:23.800 12:03:17 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:23.800 12:03:17 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:23.800 12:03:17 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:23.800 12:03:17 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:23.800 12:03:17 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:23.800 12:03:17 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:23.800 12:03:17 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:23.800 12:03:17 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:23.800 12:03:17 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:23.800 12:03:17 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:02:23.800 12:03:17 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:02:23.800 12:03:17 -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:02:23.800 12:03:17 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:23.800 12:03:17 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:23.800 12:03:17 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:23.800 12:03:17 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:23.800 12:03:17 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:23.800 12:03:17 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:23.800 12:03:17 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:23.800 12:03:17 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:23.800 12:03:17 -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.800 12:03:17 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.800 12:03:17 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.800 12:03:17 -- paths/export.sh@5 -- # export PATH 00:02:23.800 12:03:17 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:23.800 12:03:17 -- nvmf/common.sh@47 -- # : 0 00:02:23.800 12:03:17 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:23.800 12:03:17 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:23.800 12:03:17 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:23.800 12:03:17 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:23.800 12:03:17 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:23.800 12:03:17 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:23.800 12:03:17 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:23.800 12:03:17 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:23.800 12:03:17 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:23.800 12:03:17 -- spdk/autotest.sh@32 -- # 
uname -s 00:02:23.800 12:03:17 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:23.800 12:03:17 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:23.800 12:03:17 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:23.800 12:03:17 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:23.800 12:03:17 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:23.800 12:03:17 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:23.800 12:03:17 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:23.800 12:03:17 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:23.800 12:03:17 -- spdk/autotest.sh@48 -- # udevadm_pid=2721640 00:02:23.800 12:03:17 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:23.800 12:03:17 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:23.800 12:03:17 -- pm/common@17 -- # local monitor 00:02:23.800 12:03:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.800 12:03:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.800 12:03:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.800 12:03:17 -- pm/common@21 -- # date +%s 00:02:23.800 12:03:17 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:23.800 12:03:17 -- pm/common@21 -- # date +%s 00:02:23.800 12:03:17 -- pm/common@25 -- # sleep 1 00:02:23.800 12:03:17 -- pm/common@21 -- # date +%s 00:02:23.800 12:03:17 -- pm/common@21 -- # date +%s 00:02:23.800 12:03:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721988197 00:02:23.800 12:03:17 -- pm/common@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721988197 00:02:23.800 12:03:17 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721988197 00:02:23.800 12:03:17 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721988197 00:02:24.060 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721988197_collect-vmstat.pm.log 00:02:24.060 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721988197_collect-cpu-load.pm.log 00:02:24.060 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721988197_collect-cpu-temp.pm.log 00:02:24.060 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721988197_collect-bmc-pm.bmc.pm.log 00:02:25.011 12:03:18 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:25.011 12:03:18 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:25.011 12:03:18 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:25.011 12:03:18 -- common/autotest_common.sh@10 -- # set +x 00:02:25.011 12:03:18 -- spdk/autotest.sh@59 -- # create_test_list 00:02:25.011 12:03:18 -- common/autotest_common.sh@748 -- # xtrace_disable 00:02:25.011 12:03:18 -- common/autotest_common.sh@10 -- # set +x 00:02:25.011 12:03:18 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:25.011 12:03:18 -- spdk/autotest.sh@61 -- # readlink -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.011 12:03:18 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.011 12:03:18 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:25.011 12:03:18 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:25.011 12:03:18 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:25.011 12:03:18 -- common/autotest_common.sh@1455 -- # uname 00:02:25.011 12:03:18 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:25.011 12:03:18 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:25.011 12:03:18 -- common/autotest_common.sh@1475 -- # uname 00:02:25.011 12:03:18 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:25.011 12:03:18 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:25.011 12:03:18 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:25.011 12:03:18 -- spdk/autotest.sh@72 -- # hash lcov 00:02:25.011 12:03:18 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:25.011 12:03:18 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:25.011 --rc lcov_branch_coverage=1 00:02:25.011 --rc lcov_function_coverage=1 00:02:25.011 --rc genhtml_branch_coverage=1 00:02:25.011 --rc genhtml_function_coverage=1 00:02:25.011 --rc genhtml_legend=1 00:02:25.011 --rc geninfo_all_blocks=1 00:02:25.011 ' 00:02:25.011 12:03:18 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:25.011 --rc lcov_branch_coverage=1 00:02:25.011 --rc lcov_function_coverage=1 00:02:25.011 --rc genhtml_branch_coverage=1 00:02:25.011 --rc genhtml_function_coverage=1 00:02:25.011 --rc genhtml_legend=1 00:02:25.011 --rc geninfo_all_blocks=1 00:02:25.011 ' 00:02:25.011 12:03:18 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:25.011 --rc lcov_branch_coverage=1 00:02:25.011 --rc lcov_function_coverage=1 00:02:25.011 --rc genhtml_branch_coverage=1 00:02:25.011 --rc 
genhtml_function_coverage=1 00:02:25.011 --rc genhtml_legend=1 00:02:25.011 --rc geninfo_all_blocks=1 00:02:25.011 --no-external' 00:02:25.011 12:03:18 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:25.011 --rc lcov_branch_coverage=1 00:02:25.011 --rc lcov_function_coverage=1 00:02:25.011 --rc genhtml_branch_coverage=1 00:02:25.011 --rc genhtml_function_coverage=1 00:02:25.011 --rc genhtml_legend=1 00:02:25.011 --rc geninfo_all_blocks=1 00:02:25.011 --no-external' 00:02:25.011 12:03:18 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:25.011 lcov: LCOV version 1.14 00:02:25.011 12:03:18 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:26.909 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:26.909 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:26.909 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions 
found 00:02:26.909 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:26.910 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:26.910 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:26.910 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:26.910 geninfo: WARNING: 
GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:26.910 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:26.910 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:26.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:26.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:26.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:26.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:26.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no 
functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:26.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:26.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:26.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:26.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:26.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:26.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:26.911 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:26.911 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:26.911 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:41.773 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:41.773 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:59.892 12:03:51 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:59.892 12:03:51 -- common/autotest_common.sh@724 -- # xtrace_disable 00:02:59.892 12:03:51 -- common/autotest_common.sh@10 -- # set +x 00:02:59.892 12:03:51 -- spdk/autotest.sh@91 -- # rm -f 00:02:59.892 12:03:51 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:59.892 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:02:59.892 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:02:59.892 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:02:59.892 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:02:59.892 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:02:59.892 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:02:59.892 0000:00:04.2 (8086 0e22): Already using the ioatdma driver 00:02:59.892 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:02:59.892 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:02:59.892 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:02:59.892 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:02:59.892 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:02:59.892 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:02:59.892 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:02:59.892 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:02:59.892 0000:80:04.1 (8086 0e21): 
Already using the ioatdma driver 00:02:59.892 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:02:59.892 12:03:52 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:59.892 12:03:52 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:59.892 12:03:52 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:59.892 12:03:52 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:59.892 12:03:52 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:59.892 12:03:52 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:59.892 12:03:52 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:59.892 12:03:52 -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:59.892 12:03:52 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:59.892 12:03:52 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:59.892 12:03:52 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:59.892 12:03:52 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:59.892 12:03:52 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:59.892 12:03:52 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:59.892 12:03:52 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:59.892 No valid GPT data, bailing 00:02:59.892 12:03:52 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:59.892 12:03:52 -- scripts/common.sh@391 -- # pt= 00:02:59.892 12:03:52 -- scripts/common.sh@392 -- # return 1 00:02:59.892 12:03:52 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:59.892 1+0 records in 00:02:59.892 1+0 records out 00:02:59.892 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00169032 s, 620 MB/s 00:02:59.892 12:03:52 -- spdk/autotest.sh@118 -- # sync 00:02:59.892 12:03:52 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:59.892 12:03:52 -- 
common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:59.892 12:03:52 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:01.796 12:03:54 -- spdk/autotest.sh@124 -- # uname -s 00:03:01.796 12:03:54 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:01.796 12:03:54 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:01.796 12:03:54 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:01.796 12:03:54 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:01.796 12:03:54 -- common/autotest_common.sh@10 -- # set +x 00:03:01.796 ************************************ 00:03:01.796 START TEST setup.sh 00:03:01.796 ************************************ 00:03:01.796 12:03:54 setup.sh -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:01.796 * Looking for test storage... 00:03:01.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:01.796 12:03:54 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:01.796 12:03:54 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:01.796 12:03:54 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:01.796 12:03:54 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:01.796 12:03:54 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:01.796 12:03:54 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:01.796 ************************************ 00:03:01.796 START TEST acl 00:03:01.796 ************************************ 00:03:01.796 12:03:54 setup.sh.acl -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:01.796 * Looking for test storage... 
00:03:01.796 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:01.796 12:03:54 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:01.796 12:03:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:01.796 12:03:54 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:01.796 12:03:54 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:01.796 12:03:54 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:01.796 12:03:54 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:01.796 12:03:54 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:01.796 12:03:54 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:01.796 12:03:54 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:01.796 12:03:54 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:01.796 12:03:54 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:01.796 12:03:54 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:01.796 12:03:54 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:01.796 12:03:54 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:01.796 12:03:54 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:01.796 12:03:54 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:03.173 12:03:56 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:03.173 12:03:56 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:03.173 12:03:56 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:03.173 12:03:56 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:03.173 12:03:56 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:03.173 12:03:56 setup.sh.acl -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:04.109 Hugepages 00:03:04.109 node hugesize free / total 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.109 00:03:04.109 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.109 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- 
setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 
12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:04.110 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.370 12:03:57 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:88:00.0 == *:*:*.* ]] 00:03:04.370 12:03:57 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:04.370 12:03:57 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:04.370 12:03:57 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:04.370 12:03:57 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:04.370 12:03:57 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.370 12:03:57 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:04.370 12:03:57 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:04.370 12:03:57 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:04.370 12:03:57 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:04.370 12:03:57 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:04.370 ************************************ 00:03:04.370 START TEST denied 00:03:04.370 ************************************ 00:03:04.370 12:03:57 setup.sh.acl.denied -- common/autotest_common.sh@1125 -- # denied 00:03:04.370 12:03:57 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:88:00.0' 00:03:04.370 12:03:57 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:04.370 12:03:57 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:88:00.0' 00:03:04.370 12:03:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.370 12:03:57 setup.sh.acl.denied -- setup/common.sh@10 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:05.748 0000:88:00.0 (8086 0a54): Skipping denied controller at 0000:88:00.0 00:03:05.748 12:03:58 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:88:00.0 00:03:05.748 12:03:58 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:05.748 12:03:58 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:05.748 12:03:58 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:88:00.0 ]] 00:03:05.748 12:03:58 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:88:00.0/driver 00:03:05.748 12:03:58 setup.sh.acl.denied -- setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:05.748 12:03:58 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:05.748 12:03:58 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:05.748 12:03:58 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:05.748 12:03:58 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:08.282 00:03:08.282 real 0m3.733s 00:03:08.282 user 0m1.032s 00:03:08.282 sys 0m1.770s 00:03:08.282 12:04:01 setup.sh.acl.denied -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:08.282 12:04:01 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:08.282 ************************************ 00:03:08.282 END TEST denied 00:03:08.282 ************************************ 00:03:08.282 12:04:01 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:08.282 12:04:01 setup.sh.acl -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:08.282 12:04:01 setup.sh.acl -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:08.282 12:04:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:08.282 ************************************ 00:03:08.282 START TEST allowed 00:03:08.282 
************************************ 00:03:08.282 12:04:01 setup.sh.acl.allowed -- common/autotest_common.sh@1125 -- # allowed 00:03:08.282 12:04:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:88:00.0 00:03:08.282 12:04:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:08.282 12:04:01 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:88:00.0 .*: nvme -> .*' 00:03:08.282 12:04:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:08.282 12:04:01 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:10.816 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:10.816 12:04:03 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:10.816 12:04:03 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:10.816 12:04:03 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:10.816 12:04:03 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:10.816 12:04:03 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:11.753 00:03:11.753 real 0m3.736s 00:03:11.753 user 0m0.974s 00:03:11.753 sys 0m1.618s 00:03:11.753 12:04:04 setup.sh.acl.allowed -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:11.753 12:04:04 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:11.753 ************************************ 00:03:11.753 END TEST allowed 00:03:11.753 ************************************ 00:03:11.753 00:03:11.753 real 0m10.249s 00:03:11.753 user 0m3.090s 00:03:11.753 sys 0m5.150s 00:03:11.753 12:04:04 setup.sh.acl -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:11.753 12:04:04 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:11.753 ************************************ 00:03:11.753 END TEST acl 00:03:11.753 ************************************ 00:03:11.753 12:04:04 setup.sh 
-- setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:11.753 12:04:04 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:11.753 12:04:04 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:11.753 12:04:04 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:11.753 ************************************ 00:03:11.753 START TEST hugepages 00:03:11.753 ************************************ 00:03:11.753 12:04:04 setup.sh.hugepages -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:12.014 * Looking for test storage... 00:03:12.014 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:12.014 
12:04:05 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:12.014 12:04:05 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 43705448 kB' 'MemAvailable: 47206680 kB' 'Buffers: 2704 kB' 'Cached: 10274928 kB' 'SwapCached: 0 kB' 'Active: 7274140 kB' 'Inactive: 3506192 kB' 'Active(anon): 6878644 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 505924 kB' 'Mapped: 217904 kB' 'Shmem: 6375944 kB' 'KReclaimable: 186764 kB' 'Slab: 554416 kB' 'SReclaimable: 186764 kB' 'SUnreclaim: 367652 kB' 'KernelStack: 12768 kB' 'PageTables: 8432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 36562308 kB' 'Committed_AS: 7964372 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196000 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:12.014
[xtrace elided: setup/common.sh@32 tests each preceding /proc/meminfo field (MemTotal through HugePages_Surp) against Hugepagesize; every non-matching field hits 'continue']
12:04:05 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@37 -- #
local node hp 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:12.016 12:04:05 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:12.016 12:04:05 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:12.016 12:04:05 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:12.016 12:04:05 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:12.016 ************************************ 00:03:12.016 START TEST default_setup 00:03:12.016 ************************************ 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1125 -- # default_setup 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # 
get_test_nr_hugepages 2097152 0 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:12.016 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:12.017 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:12.017 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:12.017 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:12.017 12:04:05 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup 
output 00:03:12.017 12:04:05 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:12.017 12:04:05 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:12.952 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:13.213 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:13.213 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:13.213 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:13.213 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:13.213 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:13.213 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:13.213 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:13.213 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:13.213 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:13.213 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:13.213 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:13.213 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:13.213 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:13.213 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:13.213 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:14.155 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:14.155 12:04:07 
setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.155 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.156 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.156 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.156 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.156 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.156 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.156 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45840896 kB' 'MemAvailable: 49342096 kB' 'Buffers: 2704 kB' 'Cached: 10275024 kB' 'SwapCached: 0 kB' 'Active: 7293180 kB' 'Inactive: 3506192 kB' 'Active(anon): 6897684 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524904 kB' 'Mapped: 218016 kB' 'Shmem: 6376040 kB' 'KReclaimable: 186700 kB' 'Slab: 553644 kB' 'SReclaimable: 
186700 kB' 'SUnreclaim: 366944 kB' 'KernelStack: 12736 kB' 'PageTables: 7988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7985512 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:14.156
[xtrace elided: setup/common.sh@32 tests each /proc/meminfo field against AnonHugePages, each non-matching field hitting 'continue'; log truncated]
setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 
12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.157 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@19 -- # local var val 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45842752 kB' 'MemAvailable: 49343952 kB' 'Buffers: 2704 kB' 'Cached: 10275028 kB' 'SwapCached: 0 kB' 'Active: 7292204 kB' 'Inactive: 3506192 kB' 'Active(anon): 6896708 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523988 kB' 'Mapped: 218012 kB' 'Shmem: 6376044 kB' 'KReclaimable: 186700 kB' 'Slab: 553820 kB' 'SReclaimable: 186700 kB' 'SUnreclaim: 367120 kB' 'KernelStack: 12800 kB' 'PageTables: 8276 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7985532 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196064 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 
'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.158 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 
00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.159 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ 
[... 00:03:14.159-00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31-32: per-key scan of /proc/meminfo continued (CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total, HugePages_Free, HugePages_Rsvd); none matched HugePages_Surp — repeated "IFS=': ' / read -r var val _ / continue" iterations elided ...]
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': '
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _
00:03:14.160 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45842752 kB' 'MemAvailable: 49343952 kB' 'Buffers: 2704 kB' 'Cached: 10275044 kB' 'SwapCached: 0 kB' 'Active: 7292076 kB' 'Inactive: 3506192 kB' 'Active(anon): 6896580 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523840 kB' 'Mapped: 217936 kB' 'Shmem: 6376060 kB' 'KReclaimable: 186700 kB' 'Slab: 553832 kB' 'SReclaimable: 186700 kB' 'SUnreclaim: 367132 kB' 'KernelStack: 12800 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7985552 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB'
[... 00:03:14.160-00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31-32: per-key scan of /proc/meminfo (MemTotal through HugePages_Free); none matched HugePages_Rsvd — repeated "IFS=': ' / read -r var val _ / continue" iterations elided ...]
00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0
00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0
00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0
00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
nr_hugepages=1024
00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
resv_hugepages=0
00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
surplus_hugepages=0
00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
anon_hugepages=0
00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
[... 00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@17-31: same local/mem_f/mapfile setup as the HugePages_Rsvd call above, with get=HugePages_Total ...]
00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45842532 kB' 'MemAvailable: 49343732 kB' 'Buffers: 2704 kB' 'Cached: 10275068 kB' 'SwapCached: 0 kB' 'Active: 7292092 kB' 'Inactive: 3506192 kB' 'Active(anon): 6896596 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523800 kB' 'Mapped: 217936 kB' 'Shmem: 6376084 kB' 'KReclaimable: 186700 kB' 'Slab: 553832 kB' 'SReclaimable: 186700 kB' 'SUnreclaim: 367132 kB' 'KernelStack: 12784 kB' 'PageTables: 8216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7985576 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB'
[... 00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31-32: per-key scan of /proc/meminfo against HugePages_Total in progress (MemTotal through Inactive(file)); log continues ...]
00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.424 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.425 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 
-- # for node in /sys/devices/system/node/node+([0-9]) 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21166852 kB' 'MemUsed: 11710088 kB' 'SwapCached: 0 kB' 'Active: 5114396 kB' 'Inactive: 
3356980 kB' 'Active(anon): 4842464 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3356980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8361296 kB' 'Mapped: 96620 kB' 'AnonPages: 113312 kB' 'Shmem: 4732384 kB' 'KernelStack: 6760 kB' 'PageTables: 3504 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84480 kB' 'Slab: 305020 kB' 'SReclaimable: 84480 kB' 'SUnreclaim: 220540 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- 
# read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 
12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.426 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:14.427 12:04:07 
setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:14.427 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:14.428 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:14.428 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:14.428 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:14.428 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:14.428 node0=1024 expecting 1024 00:03:14.428 12:04:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:14.428 00:03:14.428 real 0m2.352s 00:03:14.428 user 0m0.620s 00:03:14.428 sys 0m0.853s 00:03:14.428 12:04:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:14.428 12:04:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:14.428 ************************************ 00:03:14.428 END TEST default_setup 00:03:14.428 ************************************ 00:03:14.428 12:04:07 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:14.428 12:04:07 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:14.428 12:04:07 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:14.428 12:04:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:14.428 ************************************ 00:03:14.428 START TEST per_node_1G_alloc 00:03:14.428 ************************************ 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1125 -- # per_node_1G_alloc 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:14.428 
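The `default_setup` trace above repeatedly runs `IFS=': '` with `read -r var val _` to scan `/proc/meminfo` one field at a time until the requested key matches. A minimal standalone sketch of that parsing pattern (not the SPDK `setup/common.sh` itself; the optional second argument for reading from a sample file is an addition for illustration):

```shell
#!/usr/bin/env bash
# Sketch of the get_meminfo pattern seen in the trace: walk a meminfo-style
# file line by line, splitting each record on ': ', and print the value of
# the first field whose name matches the requested key.
get_meminfo() {
    local get=$1 src=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$src"
    return 1   # key not found
}

# Usage (Linux only): get_meminfo HugePages_Total
```

The per-field `continue` lines in the log are this same loop skipping every non-matching key before `echo`-ing the match and returning 0.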
12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:14.428 12:04:07 
setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.428 12:04:07 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:15.367 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:15.367 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:15.367 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:15.367 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:15.367 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:15.367 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:15.367 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:15.367 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:15.367 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:15.367 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:15.367 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:15.367 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:15.367 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:15.367 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:15.367 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:15.367 
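The `get_test_nr_hugepages 1048576 0 1` call traced above converts a 1 GiB (1048576 kB) request into 512 default-size (2048 kB) pages and assigns that count to each user-supplied NUMA node, yielding `nodes_test[0]=512` and `nodes_test[1]=512`. A minimal sketch of that arithmetic, with variable names borrowed from the trace (the 2048 kB page size is an assumption matching the `Hugepagesize` shown later in this log):

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage split: divide the requested size by the
# default hugepage size, then give each requested node that many pages.
default_hugepages=2048      # kB per page (assumed, matches Hugepagesize in the log)
size_kb=1048576             # the 1 GiB request from get_test_nr_hugepages
nr_hugepages=$(( size_kb / default_hugepages ))   # 512 pages

user_nodes=(0 1)
declare -A nodes_test
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages
done

echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"
```

This is why the run then exports `NRHUGE=512` and `HUGENODE=0,1` before invoking `scripts/setup.sh`.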
0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:15.367 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.631 12:04:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.631 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45853064 kB' 'MemAvailable: 49354264 kB' 'Buffers: 2704 kB' 'Cached: 10275136 kB' 'SwapCached: 0 kB' 'Active: 7296532 kB' 'Inactive: 3506192 kB' 'Active(anon): 6901036 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 528064 kB' 'Mapped: 218436 kB' 'Shmem: 6376152 kB' 'KReclaimable: 186700 kB' 'Slab: 553732 kB' 'SReclaimable: 186700 kB' 'SUnreclaim: 367032 kB' 'KernelStack: 12800 kB' 'PageTables: 8184 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7990544 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196048 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 
12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.632 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.633 12:04:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45852584 kB' 'MemAvailable: 49353784 kB' 'Buffers: 2704 kB' 'Cached: 10275140 kB' 'SwapCached: 0 kB' 'Active: 7298032 kB' 'Inactive: 3506192 kB' 'Active(anon): 6902536 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529584 kB' 'Mapped: 218736 kB' 'Shmem: 6376156 kB' 'KReclaimable: 186700 kB' 'Slab: 553732 kB' 'SReclaimable: 186700 kB' 'SUnreclaim: 367032 kB' 'KernelStack: 12832 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7991764 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196052 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 
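The `/proc/meminfo` snapshot printed above carries the hugepage accounting that `verify_nr_hugepages` checks: `HugePages_Total: 1024`, `HugePages_Free: 1024`, `HugePages_Surp: 0`. A small sketch of extracting those three counters from such a dump in one pass (a simplification of the field-by-field loop in the trace, using a `case` instead of repeated `[[ ... ]]` tests):

```shell
#!/usr/bin/env bash
# Sketch: pull HugePages_Total/Free/Surp out of a captured meminfo snapshot.
# The snapshot values below are copied from the dump in this log.
snapshot='HugePages_Total: 1024
HugePages_Free: 1024
HugePages_Rsvd: 0
HugePages_Surp: 0'

while IFS=': ' read -r var val _; do
    case $var in
        HugePages_Total) total=$val ;;
        HugePages_Free)  free=$val ;;
        HugePages_Surp)  surp=$val ;;
    esac
done <<< "$snapshot"

echo "total=$total free=$free surp=$surp"
```

A surplus of 0 with total equal to the requested count is what lets the later `[[ 1024 == 1024 ]]` comparison in the verify step pass.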
00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.633 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.634 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45857724 kB' 'MemAvailable: 49358924 kB' 'Buffers: 2704 kB' 'Cached: 10275140 kB' 'SwapCached: 0 kB' 'Active: 7293752 kB' 'Inactive: 3506192 kB' 
'Active(anon): 6898256 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525304 kB' 'Mapped: 218736 kB' 'Shmem: 6376156 kB' 'KReclaimable: 186700 kB' 'Slab: 553792 kB' 'SReclaimable: 186700 kB' 'SUnreclaim: 367092 kB' 'KernelStack: 12832 kB' 'PageTables: 8232 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7988340 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196032 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.635 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 
12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.636 12:04:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.636 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ 
HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:15.637 nr_hugepages=1024 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:15.637 resv_hugepages=0 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:15.637 surplus_hugepages=0 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:15.637 anon_hugepages=0 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 
-- # get_meminfo HugePages_Total 00:03:15.637 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45851768 kB' 'MemAvailable: 49352968 kB' 'Buffers: 2704 kB' 'Cached: 10275180 kB' 'SwapCached: 0 kB' 'Active: 7297688 kB' 'Inactive: 3506192 kB' 'Active(anon): 6902192 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 529192 kB' 'Mapped: 218384 kB' 'Shmem: 6376196 kB' 'KReclaimable: 186700 kB' 'Slab: 553784 kB' 'SReclaimable: 186700 kB' 'SUnreclaim: 367084 kB' 'KernelStack: 12848 kB' 'PageTables: 8260 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 
'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7991808 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196052 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 
00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 
12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.638 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 
00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:15.639 12:04:08 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue
00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- [... repeated skip trace elided: ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree and Unaccepted each read and skipped via "IFS=': '; read -r var val _; continue"; none matches HugePages_Total ...]
00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:15.639 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22224988 kB' 'MemUsed: 10651952 kB' 'SwapCached: 0 kB' 'Active: 5114060 kB' 'Inactive: 3356980 kB' 'Active(anon): 4842128 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3356980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8361300 kB' 'Mapped: 96896 kB' 'AnonPages: 112860 kB' 'Shmem: 4732388 kB' 'KernelStack: 6728 kB' 'PageTables: 3436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84480 kB' 'Slab: 305096 kB' 'SReclaimable: 84480 kB' 'SUnreclaim: 220616 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:15.640 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- [... repeated skip trace elided: every node0 field from MemTotal through HugePages_Free read and skipped; none matches HugePages_Surp ...]
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 ))
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]]
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23627992 kB' 'MemUsed: 4036780 kB' 'SwapCached: 0 kB' 'Active: 2178256 kB' 'Inactive: 149212 kB' 'Active(anon): 2054692 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 149212 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1916624 kB' 'Mapped: 121316 kB' 'AnonPages: 410952 kB' 'Shmem: 1643848 kB' 'KernelStack: 6088 kB' 'PageTables: 4748 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102220 kB' 'Slab: 248688 kB' 'SReclaimable: 102220 kB' 'SUnreclaim: 146468 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
00:03:15.931 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- [... repeated skip trace elided: node1 fields from MemTotal through FilePmdMapped read and skipped; none matches HugePages_Surp ...]
00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[
Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # 
sorted_s[nodes_sys[node]]=1 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:15.932 node0=512 expecting 512 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:15.932 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:15.932 node1=512 expecting 512 00:03:15.933 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:15.933 00:03:15.933 real 0m1.393s 00:03:15.933 user 0m0.594s 00:03:15.933 sys 0m0.761s 00:03:15.933 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:15.933 12:04:08 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:15.933 ************************************ 00:03:15.933 END TEST per_node_1G_alloc 00:03:15.933 ************************************ 00:03:15.933 12:04:08 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:15.933 12:04:08 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:15.933 12:04:08 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:15.933 12:04:08 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:15.933 ************************************ 00:03:15.933 START TEST even_2G_alloc 00:03:15.933 ************************************ 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1125 -- # even_2G_alloc 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # 
get_test_nr_hugepages 2097152 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 
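The `even_2G_alloc` trace above shows `get_test_nr_hugepages 2097152` converting a 2 GiB request into `nr_hugepages=1024` (2097152 kB divided by the 2048 kB default hugepage size) and then filling `nodes_test` from the highest node index down with 512 pages each across the 2 NUMA nodes. A rough sketch of that arithmetic, with the loop logic reconstructed from the trace rather than taken verbatim from `setup/hugepages.sh`:

```shell
#!/usr/bin/env bash
# Sketch of the even per-node hugepage split traced above (reconstructed logic).
size_kb=2097152            # requested total allocation, in kB (2 GiB)
default_hugepages=2048     # default 2 MiB hugepage size, in kB
nr_hugepages=$(( size_kb / default_hugepages ))   # 1024 pages total
_no_nodes=2                # NUMA nodes on this test machine

per_node=$(( nr_hugepages / _no_nodes ))          # 512 pages per node
declare -a nodes_test
while (( _no_nodes > 0 )); do
  # Fill from the last node index down, matching nodes_test[_no_nodes - 1]=512
  # in the trace.
  nodes_test[_no_nodes - 1]=$per_node
  : $(( _no_nodes-- ))
done

echo "node0=${nodes_test[0]} expecting 512"
echo "node1=${nodes_test[1]} expecting 512"
```

This matches the later `node0=512 expecting 512` / `node1=512 expecting 512` lines the test prints before asserting `[[ 512 == 512 ]]`.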
00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:15.933 12:04:08 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:16.871 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:16.871 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:16.871 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:16.871 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:16.871 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:16.871 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:16.871 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:16.871 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:16.871 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:16.871 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:16.871 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:16.871 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:16.871 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:16.871 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:16.871 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:16.871 0000:80:04.1 (8086 0e21): Already using the vfio-pci 
driver 00:03:16.871 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.138 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45847760 kB' 'MemAvailable: 49348960 kB' 'Buffers: 2704 kB' 'Cached: 10275272 kB' 'SwapCached: 0 kB' 'Active: 7293224 kB' 'Inactive: 3506192 kB' 'Active(anon): 6897728 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524188 kB' 'Mapped: 217964 kB' 'Shmem: 6376288 kB' 'KReclaimable: 186700 kB' 'Slab: 553616 kB' 'SReclaimable: 186700 kB' 'SUnreclaim: 366916 kB' 'KernelStack: 12896 kB' 'PageTables: 8364 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7985892 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 
12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.138 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s 
]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45851560 kB' 'MemAvailable: 49352760 kB' 'Buffers: 2704 kB' 'Cached: 10275276 kB' 'SwapCached: 0 kB' 'Active: 7292736 kB' 'Inactive: 3506192 kB' 'Active(anon): 6897240 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 
'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524172 kB' 'Mapped: 217964 kB' 'Shmem: 6376292 kB' 'KReclaimable: 186700 kB' 'Slab: 553572 kB' 'SReclaimable: 186700 kB' 'SUnreclaim: 366872 kB' 'KernelStack: 12848 kB' 'PageTables: 8200 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7985912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.139 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45851388 kB' 'MemAvailable: 49352588 kB' 'Buffers: 2704 
kB' 'Cached: 10275276 kB' 'SwapCached: 0 kB' 'Active: 7292380 kB' 'Inactive: 3506192 kB' 'Active(anon): 6896884 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 523816 kB' 'Mapped: 217964 kB' 'Shmem: 6376292 kB' 'KReclaimable: 186700 kB' 'Slab: 553676 kB' 'SReclaimable: 186700 kB' 'SUnreclaim: 366976 kB' 'KernelStack: 12864 kB' 'PageTables: 8164 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7985932 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 
00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.140 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 
12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:17.141 nr_hugepages=1024 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:17.141 resv_hugepages=0 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:17.141 surplus_hugepages=0 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:17.141 anon_hugepages=0 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:17.141 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45851388 kB' 'MemAvailable: 49352588 kB' 'Buffers: 2704 kB' 'Cached: 10275316 kB' 'SwapCached: 0 kB' 'Active: 7292772 kB' 'Inactive: 3506192 kB' 'Active(anon): 6897276 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 524180 kB' 'Mapped: 217964 kB' 'Shmem: 6376332 kB' 'KReclaimable: 186700 kB' 'Slab: 553676 kB' 'SReclaimable: 186700 kB' 'SUnreclaim: 366976 kB' 'KernelStack: 12896 kB' 'PageTables: 8268 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7985956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196144 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.141 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- 
# [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 
-- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 
00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.142 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.142 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22227400 kB' 'MemUsed: 10649540 kB' 'SwapCached: 0 kB' 'Active: 5114100 kB' 'Inactive: 3356980 kB' 'Active(anon): 4842168 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3356980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8361388 kB' 'Mapped: 96648 kB' 'AnonPages: 112880 kB' 'Shmem: 4732476 kB' 'KernelStack: 6728 kB' 'PageTables: 3372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84480 kB' 'Slab: 305028 kB' 'SReclaimable: 84480 kB' 'SUnreclaim: 220548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 
12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # 
mem_f=/sys/devices/system/node/node1/meminfo 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.143 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23624624 kB' 'MemUsed: 4040148 kB' 'SwapCached: 0 kB' 'Active: 2178672 kB' 'Inactive: 149212 kB' 'Active(anon): 2055108 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 149212 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1916652 kB' 'Mapped: 121316 kB' 'AnonPages: 411288 kB' 'Shmem: 1643876 kB' 'KernelStack: 6168 kB' 'PageTables: 4896 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102220 kB' 'Slab: 248648 kB' 'SReclaimable: 102220 kB' 'SUnreclaim: 146428 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
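The loop traced above is `setup/common.sh` scanning meminfo one line at a time: it sets `IFS=': '`, does `read -r var val _`, and `continue`s past every key until it reaches the requested field (here `HugePages_Surp`), then echoes that field's value. A minimal, self-contained re-creation of that scan — the helper name and the simplified key match are illustrative, not SPDK's exact code:

```shell
#!/usr/bin/env bash
# Hedged re-creation of the scan in the trace: walk a meminfo-style
# file with IFS=': ' and `read -r var val _`, skip non-matching keys,
# and print the value of the first line whose key matches.
get_meminfo_field() {
    local get=$1 mem_f=$2 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

# Self-contained demo against a miniature meminfo file.
tmp=$(mktemp)
printf '%s\n' 'HugePages_Total: 1024' 'HugePages_Surp: 0' > "$tmp"
get_meminfo_field HugePages_Surp "$tmp"   # prints 0
rm -f "$tmp"
```

Because `IFS=': '` splits on both the colon and the following space, `var` receives the bare key and `val` the number, with any trailing `kB` unit landing in `_` — which is why the trace compares `var` directly against `HugePages_Surp` with no further cleanup.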
00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:17.144 node0=512 expecting 512 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:17.144 node1=512 expecting 512 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:17.144 00:03:17.144 real 0m1.322s 00:03:17.144 user 0m0.559s 00:03:17.144 sys 0m0.723s 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- 
common/autotest_common.sh@1126 -- # xtrace_disable 00:03:17.144 12:04:10 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:17.144 ************************************ 00:03:17.144 END TEST even_2G_alloc 00:03:17.144 ************************************ 00:03:17.144 12:04:10 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:17.144 12:04:10 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:17.144 12:04:10 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:17.144 12:04:10 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:17.144 ************************************ 00:03:17.144 START TEST odd_alloc 00:03:17.144 ************************************ 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1125 -- # odd_alloc 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 
-- # nodes_test=() 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:17.144 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:17.145 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:17.145 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:17.145 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:17.145 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:17.145 12:04:10 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:17.145 12:04:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:17.145 12:04:10 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:18.528 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:18.528 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:18.528 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:18.528 0000:00:04.5 (8086 0e25): Already using the vfio-pci 
driver 00:03:18.528 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:18.528 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:18.528 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:18.528 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:18.528 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:18.528 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:18.528 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:18.528 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:18.528 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:18.528 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:18.528 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:18.528 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:18.528 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:18.528 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45838912 kB' 'MemAvailable: 49340080 kB' 'Buffers: 2704 kB' 'Cached: 10275400 kB' 'SwapCached: 0 kB' 'Active: 7289936 kB' 'Inactive: 3506192 kB' 'Active(anon): 6894440 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521288 kB' 'Mapped: 217264 kB' 'Shmem: 6376416 kB' 'KReclaimable: 186636 kB' 'Slab: 553680 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 367044 kB' 'KernelStack: 12864 kB' 'PageTables: 8060 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7970584 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196128 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 
0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.528 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
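The odd_alloc setup traced earlier (`nodes_test[_no_nodes - 1]=512` on node 1, then `=513` on node 0) shows how 1025 hugepages — an odd count that cannot split evenly — land across two NUMA nodes. A hedged sketch of that arithmetic, assuming each node (walked highest-index first) takes an integer share of the pages still unassigned; the function name is illustrative, not SPDK's:

```shell
#!/usr/bin/env bash
# Hedged sketch of the per-node split implied by the odd_alloc trace:
# 1025 pages over 2 nodes come out as node1=512 and node0=513.
split_hugepages() {
    local total=$1 nodes=$2 i share
    for ((i = nodes - 1; i >= 0; i--)); do
        share=$((total / (i + 1)))   # even share over the nodes left
        total=$((total - share))     # remainder rolls to lower nodes
        echo "node$i=$share"
    done
}
split_hugepages 1025 2   # node1=512, then node0=513
```

With an even count such as 1024 the same walk yields 512/512, which matches the `node0=512 expecting 512` / `node1=512 expecting 512` output of the even_2G_alloc test above.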
00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.529 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.530 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.530 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45838420 kB' 'MemAvailable: 49339588 kB' 'Buffers: 
2704 kB' 'Cached: 10275404 kB' 'SwapCached: 0 kB' 'Active: 7289632 kB' 'Inactive: 3506192 kB' 'Active(anon): 6894136 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520948 kB' 'Mapped: 217208 kB' 'Shmem: 6376420 kB' 'KReclaimable: 186636 kB' 'Slab: 553656 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 367020 kB' 'KernelStack: 12816 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7970604 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.530 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.530 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.531 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:18.532 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45838528 kB' 'MemAvailable: 49339696 kB' 'Buffers: 2704 kB' 'Cached: 10275412 kB' 'SwapCached: 0 kB' 'Active: 7289636 kB' 'Inactive: 3506192 kB' 'Active(anon): 6894140 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521000 kB' 'Mapped: 217208 kB' 'Shmem: 6376428 kB' 'KReclaimable: 186636 kB' 'Slab: 553656 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 367020 kB' 'KernelStack: 12848 kB' 'PageTables: 7984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 
'Committed_AS: 7970628 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196080 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 
-- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.532 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.533 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 
12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:18.534 nr_hugepages=1025 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:18.534 resv_hugepages=0 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:18.534 surplus_hugepages=0 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:18.534 anon_hugepages=0 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc 
-- setup/common.sh@19 -- # local var val 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45841996 kB' 'MemAvailable: 49343164 kB' 'Buffers: 2704 kB' 'Cached: 10275432 kB' 'SwapCached: 0 kB' 'Active: 7290536 kB' 'Inactive: 3506192 kB' 'Active(anon): 6895040 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521968 kB' 'Mapped: 217208 kB' 'Shmem: 6376448 kB' 'KReclaimable: 186636 kB' 'Slab: 553652 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 367016 kB' 'KernelStack: 12880 kB' 'PageTables: 8096 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37609860 kB' 'Committed_AS: 7972012 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196112 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 
1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.534 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 
12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 
12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.535 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 
12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 
00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local 
node 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.536 12:04:11 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22213684 kB' 'MemUsed: 10663256 kB' 'SwapCached: 0 kB' 'Active: 5113360 kB' 'Inactive: 3356980 kB' 'Active(anon): 4841428 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3356980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8361484 kB' 'Mapped: 96076 kB' 'AnonPages: 112064 kB' 'Shmem: 4732572 kB' 'KernelStack: 7096 kB' 'PageTables: 3936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84480 kB' 'Slab: 305036 kB' 'SReclaimable: 84480 kB' 'SUnreclaim: 220556 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.536 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.536 12:04:11 
12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.537 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.537 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.537 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.537 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:18.538 
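The xtrace above shows `get_meminfo` walking `/sys/devices/system/node/node0/meminfo` field by field with `IFS=': ' read` until it hits the requested key. A minimal standalone sketch of that pattern, assuming a Linux host (awk replaces the field-by-field read loop; the fallback and naming mirror the `mem_f` logic visible above):

```shell
#!/usr/bin/env bash
# Hedged sketch of the get_meminfo helper exercised above: print one
# field (e.g. HugePages_Surp) from a per-node meminfo file, falling
# back to /proc/meminfo when no node is given or the node file is
# absent. This is an illustration, not the real setup/common.sh code.
get_meminfo() {
    local get=$1 node=${2:-}
    local mem_f=/proc/meminfo
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    # Per-node files prefix each line with "Node <n> "; strip it first,
    # then print the value of the matching "Field:" line and stop.
    awk -v f="$get" '{ sub(/^Node [0-9]+ /, "") }
                     $1 == f":" { print $2; exit }' "$mem_f"
}

get_meminfo MemTotal          # system-wide total RAM in kB
get_meminfo HugePages_Surp 0  # node0 surplus pages, if node0 exists
```

The `exit` after the first match is what the `return 0` in the log corresponds to: once the requested field is found, the remaining meminfo lines are never read.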
12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 23627388 kB' 'MemUsed: 4037384 kB' 'SwapCached: 0 kB' 'Active: 2177908 kB' 'Inactive: 149212 kB' 'Active(anon): 2054344 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 149212 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1916656 kB' 'Mapped: 121072 kB' 'AnonPages: 410620 kB' 'Shmem: 1643880 kB' 'KernelStack: 6376 kB' 'PageTables: 5272 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102156 kB' 'Slab: 248616 kB' 'SReclaimable: 102156 
kB' 'SUnreclaim: 146460 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.538 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:18.538 12:04:11
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:18.539 node0=512 expecting 513 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:18.539 node1=513 expecting 512 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- 
setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:18.539 00:03:18.539 real 0m1.427s 00:03:18.539 user 0m0.626s 00:03:18.539 sys 0m0.761s 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:18.539 12:04:11 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:18.539 ************************************ 00:03:18.539 END TEST odd_alloc 00:03:18.539 ************************************ 00:03:18.539 12:04:11 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:18.539 12:04:11 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:18.539 12:04:11 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:18.539 12:04:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:18.799 ************************************ 00:03:18.799 START TEST custom_alloc 00:03:18.799 ************************************ 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1125 -- # custom_alloc 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:18.799 12:04:11 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:18.799 12:04:11 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:18.799 12:04:11 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:18.799 12:04:11 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:19.737 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:19.737 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:19.737 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:19.737 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:19.737 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:19.737 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:19.737 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:19.737 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:19.738 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:19.738 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:19.738 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:19.738 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:19.738 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:19.738 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:19.738 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:19.738 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:19.738 0000:80:04.0 (8086 0e20): Already using 
the vfio-pci driver 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.003 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.004 12:04:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44766592 kB' 'MemAvailable: 48267760 kB' 'Buffers: 2704 kB' 'Cached: 10275540 kB' 'SwapCached: 0 kB' 'Active: 7289840 kB' 'Inactive: 3506192 kB' 'Active(anon): 6894344 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521056 kB' 'Mapped: 217180 kB' 'Shmem: 6376556 kB' 'KReclaimable: 186636 kB' 'Slab: 553320 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 366684 kB' 'KernelStack: 12816 kB' 'PageTables: 7876 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7971216 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.004 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 
12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:20.005 
12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44769048 kB' 'MemAvailable: 48270216 kB' 'Buffers: 2704 kB' 'Cached: 10275544 kB' 'SwapCached: 0 kB' 'Active: 7289416 kB' 'Inactive: 3506192 kB' 'Active(anon): 6893920 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 
'Writeback: 0 kB' 'AnonPages: 520588 kB' 'Mapped: 217144 kB' 'Shmem: 6376560 kB' 'KReclaimable: 186636 kB' 'Slab: 553316 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 366680 kB' 'KernelStack: 12832 kB' 'PageTables: 7844 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7971236 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196176 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.005 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[identical '[[ <key> == HugePages_Surp ]] / continue' xtrace iteration repeated for every remaining /proc/meminfo key, MemFree through HugePages_Surp; repeats elided]
setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44769856 kB' 'MemAvailable: 48271024 kB' 'Buffers: 2704 kB' 'Cached: 10275560 kB' 'SwapCached: 0 kB' 'Active: 7289572 kB' 'Inactive: 3506192 kB' 'Active(anon): 6894076 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 
'AnonPages: 520720 kB' 'Mapped: 217144 kB' 'Shmem: 6376576 kB' 'KReclaimable: 186636 kB' 'Slab: 553356 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 366720 kB' 'KernelStack: 12816 kB' 'PageTables: 7820 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7971256 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.007 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _
[identical '[[ <key> == HugePages_Rsvd ]] / continue' xtrace iteration repeated for each remaining /proc/meminfo key; repeats elided, trace continues]
setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.009 12:04:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:20.009 nr_hugepages=1536 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc 
-- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:20.009 resv_hugepages=0 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:20.009 surplus_hugepages=0 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:20.009 anon_hugepages=0 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.009 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 44769856 kB' 'MemAvailable: 48271024 kB' 
'Buffers: 2704 kB' 'Cached: 10275580 kB' 'SwapCached: 0 kB' 'Active: 7289588 kB' 'Inactive: 3506192 kB' 'Active(anon): 6894092 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520752 kB' 'Mapped: 217144 kB' 'Shmem: 6376596 kB' 'KReclaimable: 186636 kB' 'Slab: 553356 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 366720 kB' 'KernelStack: 12832 kB' 'PageTables: 7872 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37086596 kB' 'Committed_AS: 7971276 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196160 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 
12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.010 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim 
== \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 
12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in 
/sys/devices/system/node/node+([0-9]) 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.011 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 22200748 kB' 'MemUsed: 
10676192 kB' 'SwapCached: 0 kB' 'Active: 5111364 kB' 'Inactive: 3356980 kB' 'Active(anon): 4839432 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3356980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8361624 kB' 'Mapped: 96072 kB' 'AnonPages: 109896 kB' 'Shmem: 4732712 kB' 'KernelStack: 6648 kB' 'PageTables: 3024 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84480 kB' 'Slab: 304832 kB' 'SReclaimable: 84480 kB' 'SUnreclaim: 220352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 
-- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.012 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.013 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.272 12:04:13 
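At this point in the trace, `get_meminfo HugePages_Surp 0` has returned 0 for node 0 and hugepages.sh folds it into `nodes_test[0]`; the same pass then repeats for node 1. The bookkeeping being traced can be sketched as a self-contained snippet (the array literals, the 1536 total, and the surplus value of 0 are the numbers this run reported; `hp_surp` is an illustrative local name, not a variable from SPDK's hugepages.sh):

```shell
#!/usr/bin/env bash
# Sketch of the per-node hugepage bookkeeping traced above.
# Concrete numbers (1536 total, 512/1024 split, surplus 0) come from this run.
nr_hugepages=1536 surp=0 resv=0
nodes_test=([0]=512 [1]=1024)   # expected per-node split (indexed array)

# hugepages.sh@110: the global HugePages_Total must match the expectation.
(( 1536 == nr_hugepages + surp + resv )) || exit 1

# hugepages.sh@115-117: fold reserved and per-node surplus pages in.
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    hp_surp=0   # stands in for: get_meminfo HugePages_Surp "$node"
    (( nodes_test[node] += hp_surp ))
done
echo "node0=${nodes_test[0]} expecting 512"
echo "node1=${nodes_test[1]} expecting 1024"
```

With both `resv` and the surplus at 0, the nodes keep their 512/1024 split, which is exactly the `node0=512 expecting 512` / `node1=1024 expecting 1024` output the test prints at the end.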
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.272 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 27664772 kB' 'MemFree: 22568128 kB' 'MemUsed: 5096644 kB' 'SwapCached: 0 kB' 'Active: 2177900 kB' 'Inactive: 149212 kB' 'Active(anon): 2054336 kB' 'Inactive(anon): 0 kB' 'Active(file): 123564 kB' 'Inactive(file): 149212 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 1916664 kB' 'Mapped: 121072 kB' 'AnonPages: 410528 kB' 'Shmem: 1643888 kB' 'KernelStack: 6168 kB' 'PageTables: 4796 
kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 102156 kB' 'Slab: 248524 kB' 'SReclaimable: 102156 kB' 'SUnreclaim: 146368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.273 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.274 12:04:13 
setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:20.274 node0=512 expecting 512 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:20.274 node1=1024 expecting 1024 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:20.274 00:03:20.274 real 0m1.481s 00:03:20.274 user 0m0.604s 00:03:20.274 sys 0m0.842s 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:20.274 12:04:13 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:20.274 ************************************ 00:03:20.274 END TEST custom_alloc 00:03:20.274 ************************************ 00:03:20.274 12:04:13 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:20.274 12:04:13 setup.sh.hugepages -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:20.274 12:04:13 setup.sh.hugepages -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:20.274 12:04:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:20.274 ************************************ 00:03:20.274 START TEST no_shrink_alloc 00:03:20.274 ************************************ 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1125 -- # no_shrink_alloc 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:20.274 12:04:13 
setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 
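The long `[[ Key == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] / continue` runs that dominate this trace all come from one helper pattern in setup/common.sh: read /proc/meminfo one line at a time with `IFS=': '`, compare each key against the requested field, and echo its value (the `echo 0 / return 0` entries mark the match). The following is a minimal sketch of that pattern, not the actual setup/common.sh source; it reads from stdin instead of /proc/meminfo so the demo is self-contained, and the function name simply mirrors the trace.

```shell
#!/usr/bin/env bash
# Hypothetical reconstruction of the get_meminfo scan pattern seen in the
# xtrace above. Splitting on IFS=': ' turns a line like
#   "MemTotal:       60541712 kB"
# into var=MemTotal, val=60541712 (the trailing "kB" lands in _).
get_meminfo() {
    local get=$1 var val _
    while IFS=': ' read -r var val _; do
        if [[ $var == "$get" ]]; then
            echo "$val"     # matched key: print its value, stop scanning
            return 0
        fi
    done
    echo 0                  # key absent: report 0, as the trace does
}

# Demo against a tiny sample instead of the real /proc/meminfo:
sample=$'MemTotal:       60541712 kB\nHugePages_Surp:        0\nAnonHugePages:         0 kB'
get_meminfo HugePages_Surp <<< "$sample"    # prints 0
get_meminfo MemTotal       <<< "$sample"    # prints 60541712
```

This explains why every non-matching meminfo key appears in the trace: under `set -x`, each loop iteration logs the failed `[[ ... ]]` test and the `continue` before the scan reaches the requested field.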
00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.274 12:04:13 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:21.210 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:21.210 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:21.210 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:21.210 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:21.210 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:21.210 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:21.210 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:21.210 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:21.210 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:21.210 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:21.210 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:21.210 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:21.210 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:21.210 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:21.210 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:21.210 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:21.210 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # 
local surp 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45808716 kB' 'MemAvailable: 49309884 kB' 'Buffers: 2704 kB' 'Cached: 10275664 kB' 'SwapCached: 0 kB' 'Active: 7294776 kB' 'Inactive: 3506192 kB' 'Active(anon): 6899280 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 
3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 525788 kB' 'Mapped: 217696 kB' 'Shmem: 6376680 kB' 'KReclaimable: 186636 kB' 'Slab: 553348 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 366712 kB' 'KernelStack: 12832 kB' 'PageTables: 7788 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7977624 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196244 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.476 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 
00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.477 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.478 
12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45808476 kB' 'MemAvailable: 49309644 kB' 'Buffers: 2704 kB' 'Cached: 10275664 kB' 'SwapCached: 0 kB' 'Active: 7295472 kB' 'Inactive: 3506192 kB' 'Active(anon): 6899976 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526464 kB' 'Mapped: 
218152 kB' 'Shmem: 6376680 kB' 'KReclaimable: 186636 kB' 'Slab: 553376 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 366740 kB' 'KernelStack: 12880 kB' 'PageTables: 7900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7977640 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196196 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.478 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 
12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 
12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.479 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45810572 kB' 'MemAvailable: 49311740 kB' 'Buffers: 2704 kB' 'Cached: 10275688 kB' 'SwapCached: 0 kB' 'Active: 7291844 kB' 'Inactive: 3506192 kB' 'Active(anon): 6896348 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522944 kB' 'Mapped: 218152 kB' 'Shmem: 6376704 kB' 'KReclaimable: 186636 kB' 'Slab: 553380 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 366744 kB' 'KernelStack: 12944 kB' 'PageTables: 8048 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7973956 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.480 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.481 12:04:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:21.481 [... repetitive per-field scan of /proc/meminfo for HugePages_Rsvd elided: Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal, CmaFree, Unaccepted, HugePages_Total all skipped via "continue" ...] 00:03:21.482
12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:21.482 nr_hugepages=1024 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:21.482 resv_hugepages=0 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:21.482 surplus_hugepages=0 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:21.482 anon_hugepages=0 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:21.482 12:04:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:21.482 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45804068 kB' 'MemAvailable: 49305236 kB' 'Buffers: 2704 kB' 'Cached: 10275708 kB' 'SwapCached: 0 kB' 'Active: 7295220 kB' 'Inactive: 3506192 kB' 'Active(anon): 6899724 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 526352 kB' 'Mapped: 217672 kB' 'Shmem: 6376724 kB' 'KReclaimable: 186636 kB' 'Slab: 553380 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 366744 kB' 'KernelStack: 12928 kB' 'PageTables: 7976 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7977684 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196212 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 
kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.483 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.483 12:04:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.483 [... repetitive per-field scan of /proc/meminfo for HugePages_Total elided: Cached, SwapCached, Active, Inactive, Active(anon), Inactive(anon), Active(file), Inactive(file), Unevictable, Mlocked, SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted, AnonHugePages, ShmemHugePages, ShmemPmdMapped, FileHugePages, FilePmdMapped, CmaTotal all skipped via "continue" ...] 00:03:21.484 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree ==
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.484 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.484 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.484 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.484 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.484 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.484 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # 
no_nodes=2 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21141812 kB' 'MemUsed: 11735128 kB' 'SwapCached: 0 kB' 'Active: 5116844 kB' 'Inactive: 3356980 kB' 'Active(anon): 4844912 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3356980 kB' 
'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8361704 kB' 'Mapped: 96852 kB' 'AnonPages: 115300 kB' 'Shmem: 4732792 kB' 'KernelStack: 6648 kB' 'PageTables: 3052 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84480 kB' 'Slab: 304872 kB' 'SReclaimable: 84480 kB' 'SUnreclaim: 220392 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 
12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.485 
12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.485 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p 
]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@33 -- # echo 0 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:21.486 node0=1024 expecting 1024 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:21.486 12:04:14 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.870 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:22.870 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:22.870 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:22.870 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:22.870 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:22.870 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:22.870 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:22.870 0000:00:04.1 (8086 0e21): Already using the vfio-pci 
driver 00:03:22.870 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:22.870 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:03:22.870 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:03:22.870 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:03:22.870 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:03:22.870 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:03:22.870 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:03:22.870 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:03:22.870 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:03:22.870 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:22.870 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:22.870 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 
00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45800880 kB' 'MemAvailable: 49302048 kB' 'Buffers: 2704 kB' 'Cached: 10275772 kB' 'SwapCached: 0 kB' 'Active: 7291108 kB' 'Inactive: 3506192 kB' 'Active(anon): 6895612 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 522048 kB' 'Mapped: 217284 kB' 'Shmem: 6376788 kB' 'KReclaimable: 186636 kB' 'Slab: 553744 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 367108 kB' 'KernelStack: 12896 kB' 'PageTables: 7884 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7971740 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 
'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:22.871 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ [... trace condensed: setup/common.sh@32 tests and continues identically for each non-matching /proc/meminfo field: SwapTotal, SwapFree, Zswap, Zswapped, Dirty, Writeback, AnonPages, Mapped, Shmem, KReclaimable, Slab, SReclaimable, SUnreclaim, KernelStack, PageTables, SecPageTables, NFS_Unstable, Bounce, WritebackTmp, CommitLimit, Committed_AS, VmallocTotal, VmallocUsed, VmallocChunk, Percpu, HardwareCorrupted ...] 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.872 12:04:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.872 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45800604 kB' 'MemAvailable: 49301772 kB' 'Buffers: 2704 kB' 'Cached: 10275776 kB' 'SwapCached: 0 kB' 'Active: 7290180 kB' 'Inactive: 3506192 kB' 'Active(anon): 6894684 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521136 kB' 'Mapped: 217244 kB' 'Shmem: 6376792 kB' 'KReclaimable: 186636 kB' 'Slab: 553736 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 367100 kB' 'KernelStack: 12928 kB' 'PageTables: 7948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7971756 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 
2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:22.872 12:04:15 [... trace condensed: setup/common.sh@32 skips every /proc/meminfo field from MemTotal through HugePages_Rsvd with continue until the requested key matches ...] 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 
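The trace above shows setup/common.sh's get_meminfo reading /proc/meminfo one line at a time with IFS=': ' and read -r, skipping every field with continue until the requested key matches, then echoing the value and returning. A minimal sketch of that loop (reconstructed from the trace, not the actual SPDK source — the real helper also snapshots the file with mapfile and supports per-NUMA-node meminfo files; the optional second argument here is an illustrative addition):

```shell
#!/usr/bin/env bash
# Minimal sketch of the meminfo-lookup loop seen in the trace above.
# Hypothetical reconstruction: the real setup/common.sh get_meminfo also
# handles /sys/devices/system/node/nodeN/meminfo and pre-reads the file
# with mapfile; the optional file argument is added here for illustration.
get_meminfo() {
    local get=$1 mem_f=${2:-/proc/meminfo} var val _
    while IFS=': ' read -r var val _; do
        # Skip every field until the requested key matches.
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1   # key not present
}
```

On the traced node, get_meminfo HugePages_Surp would print 0, matching the "echo 0" / "return 0" lines in the log.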
00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.874 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45801112 kB' 'MemAvailable: 49302280 kB' 'Buffers: 2704 kB' 'Cached: 10275796 kB' 'SwapCached: 0 kB' 'Active: 7290100 kB' 'Inactive: 3506192 kB' 'Active(anon): 6894604 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 520988 kB' 'Mapped: 217164 kB' 'Shmem: 6376812 kB' 'KReclaimable: 186636 kB' 'Slab: 553676 kB' 
'SReclaimable: 186636 kB' 'SUnreclaim: 367040 kB' 'KernelStack: 12928 kB' 'PageTables: 7944 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7971780 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196192 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.875 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed 
== \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.876 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:22.877 12:04:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:22.877 nr_hugepages=1024 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:22.877 resv_hugepages=0 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:22.877 surplus_hugepages=0 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:22.877 anon_hugepages=0 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 
-- # mapfile -t mem 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 60541712 kB' 'MemFree: 45802512 kB' 'MemAvailable: 49303680 kB' 'Buffers: 2704 kB' 'Cached: 10275816 kB' 'SwapCached: 0 kB' 'Active: 7290076 kB' 'Inactive: 3506192 kB' 'Active(anon): 6894580 kB' 'Inactive(anon): 0 kB' 'Active(file): 395496 kB' 'Inactive(file): 3506192 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 521032 kB' 'Mapped: 217052 kB' 'Shmem: 6376832 kB' 'KReclaimable: 186636 kB' 'Slab: 553676 kB' 'SReclaimable: 186636 kB' 'SUnreclaim: 367040 kB' 'KernelStack: 12944 kB' 'PageTables: 8016 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 37610884 kB' 'Committed_AS: 7971432 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 196208 kB' 'VmallocChunk: 0 kB' 'Percpu: 35520 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 1832540 kB' 'DirectMap2M: 14864384 kB' 'DirectMap1G: 52428800 kB' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.877 12:04:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.877 12:04:15 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.877 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@18 -- # local node=0 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 32876940 kB' 'MemFree: 21138892 kB' 'MemUsed: 11738048 kB' 'SwapCached: 0 kB' 'Active: 5110976 kB' 'Inactive: 3356980 kB' 'Active(anon): 4839044 kB' 'Inactive(anon): 0 kB' 'Active(file): 271932 kB' 'Inactive(file): 3356980 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 8361704 kB' 'Mapped: 96092 kB' 'AnonPages: 109384 kB' 'Shmem: 4732792 kB' 'KernelStack: 6680 kB' 'PageTables: 2988 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 84480 kB' 'Slab: 305016 kB' 'SReclaimable: 84480 kB' 'SUnreclaim: 220536 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:22.879 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:22.881 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:22.881 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:22.881 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:22.881 12:04:15 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:22.881 12:04:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:22.881 12:04:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:22.881 12:04:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:22.881 12:04:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:22.881 12:04:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo
'node0=1024 expecting 1024' 00:03:22.881 node0=1024 expecting 1024 00:03:22.881 12:04:16 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:22.881 00:03:22.881 real 0m2.686s 00:03:22.881 user 0m1.112s 00:03:22.881 sys 0m1.488s 00:03:22.881 12:04:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:22.881 12:04:16 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:22.881 ************************************ 00:03:22.881 END TEST no_shrink_alloc 00:03:22.881 ************************************ 00:03:22.881 12:04:16 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:22.881 12:04:16 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:22.881 12:04:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:22.881 12:04:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.881 12:04:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.881 12:04:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.881 12:04:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.881 12:04:16 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:22.881 12:04:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.881 12:04:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.881 12:04:16 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:22.881 12:04:16 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:22.881 12:04:16 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:22.881 12:04:16 setup.sh.hugepages -- 
setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:22.881 00:03:22.881 real 0m11.037s 00:03:22.881 user 0m4.265s 00:03:22.881 sys 0m5.672s 00:03:22.881 12:04:16 setup.sh.hugepages -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:22.881 12:04:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:22.881 ************************************ 00:03:22.881 END TEST hugepages 00:03:22.881 ************************************ 00:03:22.881 12:04:16 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:22.881 12:04:16 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:22.881 12:04:16 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:22.881 12:04:16 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:22.881 ************************************ 00:03:22.881 START TEST driver 00:03:22.881 ************************************ 00:03:22.881 12:04:16 setup.sh.driver -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:23.141 * Looking for test storage... 
00:03:23.141 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:23.141 12:04:16 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:23.141 12:04:16 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:23.141 12:04:16 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:25.673 12:04:18 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:25.673 12:04:18 setup.sh.driver -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:25.673 12:04:18 setup.sh.driver -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:25.673 12:04:18 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:25.673 ************************************ 00:03:25.673 START TEST guess_driver 00:03:25.673 ************************************ 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- common/autotest_common.sh@1125 -- # guess_driver 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@29 
-- # (( 141 > 0 )) 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:25.673 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:25.673 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:25.673 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:25.673 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:25.673 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:25.673 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:25.673 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:25.673 Looking for driver=vfio-pci 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup 
output config 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:25.673 12:04:18 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- 
setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.610 12:04:19 setup.sh.driver.guess_driver -- 
setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.870 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.871 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.871 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.871 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.871 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.871 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.871 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.871 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.871 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:26.871 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:26.871 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:26.871 12:04:19 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:27.809 12:04:20 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:27.809 12:04:20 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:27.809 12:04:20 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:27.809 12:04:20 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:27.809 12:04:20 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:27.809 12:04:20 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:27.809 12:04:20 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:30.348 00:03:30.348 real 0m4.803s 00:03:30.348 user 0m1.099s 00:03:30.348 sys 0m1.805s 00:03:30.348 12:04:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:30.348 12:04:23 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:30.348 ************************************ 00:03:30.348 END TEST guess_driver 00:03:30.348 ************************************ 00:03:30.348 00:03:30.348 real 0m7.312s 00:03:30.348 user 0m1.635s 00:03:30.348 sys 0m2.772s 00:03:30.348 12:04:23 setup.sh.driver -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:30.348 12:04:23 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:30.348 ************************************ 00:03:30.348 END TEST driver 00:03:30.348 ************************************ 00:03:30.348 12:04:23 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:30.348 12:04:23 setup.sh -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:30.348 12:04:23 setup.sh -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:30.348 12:04:23 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:30.348 ************************************ 00:03:30.348 START TEST devices 00:03:30.349 ************************************ 00:03:30.349 12:04:23 setup.sh.devices -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:30.349 * Looking for test storage... 
00:03:30.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:30.349 12:04:23 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:30.349 12:04:23 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:30.349 12:04:23 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:30.349 12:04:23 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:32.257 12:04:24 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:32.257 12:04:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:32.257 12:04:24 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:32.257 12:04:24 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:32.257 12:04:24 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:32.257 12:04:24 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:32.257 12:04:24 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:32.257 12:04:24 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:32.257 12:04:24 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:32.257 12:04:24 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:32.257 12:04:24 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:32.257 12:04:24 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:32.257 12:04:24 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:32.257 12:04:24 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:32.257 12:04:24 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:32.257 12:04:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 
00:03:32.257 12:04:24 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:32.257 12:04:24 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:88:00.0 00:03:32.257 12:04:24 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\8\8\:\0\0\.\0* ]] 00:03:32.257 12:04:24 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:32.257 12:04:24 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:32.257 12:04:24 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:32.257 No valid GPT data, bailing 00:03:32.257 12:04:25 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:32.257 12:04:25 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:32.257 12:04:25 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:32.257 12:04:25 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:32.257 12:04:25 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:32.257 12:04:25 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:32.257 12:04:25 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:32.257 12:04:25 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:32.257 12:04:25 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:32.257 12:04:25 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:88:00.0 00:03:32.257 12:04:25 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:32.257 12:04:25 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:32.257 12:04:25 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:32.257 12:04:25 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:03:32.257 12:04:25 setup.sh.devices -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:03:32.257 12:04:25 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:32.257 ************************************ 00:03:32.257 START TEST nvme_mount 00:03:32.257 ************************************ 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1125 -- # nvme_mount 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:32.257 12:04:25 
setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:32.257 12:04:25 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:33.202 Creating new GPT entries in memory. 00:03:33.202 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:33.202 other utilities. 00:03:33.202 12:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:33.202 12:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:33.202 12:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:33.202 12:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:33.202 12:04:26 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:34.141 Creating new GPT entries in memory. 00:03:34.141 The operation has completed successfully. 
00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 2741601 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:88:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 
00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:34.141 12:04:27 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:35.077 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.340 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:35.340 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:35.340 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.340 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:35.340 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:35.340 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:03:35.340 
12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.340 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.340 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:35.340 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:35.340 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:35.340 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:35.340 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:35.600 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:35.600 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:35.600 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:35.600 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:88:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.600 12:04:28 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:36.978 12:04:29 setup.sh.devices.nvme_mount 
-- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:88:00.0 data@nvme0n1 '' '' 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:36.978 12:04:30 
setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:36.978 12:04:30 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 
setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r 
pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:38.351 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:03:38.351 00:03:38.351 real 0m6.324s 00:03:38.351 user 0m1.486s 00:03:38.351 sys 0m2.383s 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:38.351 12:04:31 setup.sh.devices.nvme_mount -- common/autotest_common.sh@10 -- # set +x 00:03:38.351 ************************************ 00:03:38.351 END TEST nvme_mount 00:03:38.351 ************************************ 00:03:38.351 12:04:31 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:03:38.351 12:04:31 setup.sh.devices -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 
00:03:38.351 12:04:31 setup.sh.devices -- common/autotest_common.sh@1107 -- # xtrace_disable 00:03:38.351 12:04:31 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:38.351 ************************************ 00:03:38.351 START TEST dm_mount 00:03:38.351 ************************************ 00:03:38.351 12:04:31 setup.sh.devices.dm_mount -- common/autotest_common.sh@1125 -- # dm_mount 00:03:38.351 12:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:03:38.351 12:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:03:38.351 12:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:03:38.351 12:04:31 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:03:38.351 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:38.351 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 
00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:38.352 12:04:31 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:03:39.287 Creating new GPT entries in memory. 00:03:39.287 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:39.287 other utilities. 00:03:39.287 12:04:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:39.287 12:04:32 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:39.287 12:04:32 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:39.287 12:04:32 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:39.287 12:04:32 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:40.667 Creating new GPT entries in memory. 00:03:40.667 The operation has completed successfully. 00:03:40.667 12:04:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:40.667 12:04:33 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:40.667 12:04:33 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:40.667 12:04:33 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:40.667 12:04:33 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:03:41.603 The operation has completed successfully. 
00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 2743988 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-0 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-0 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-0 ]] 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-0 ]] 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # 
local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:88:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ 
status 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:41.603 12:04:34 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 
0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:42.537 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.796 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:42.796 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:88:00.0 holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 '' '' 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:88:00.0 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- 
setup/devices.sh@51 -- # local test_file= 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:88:00.0 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:42.797 12:04:35 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:43.733 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:88:00.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.733 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-0,holder@nvme0n1p2:dm-0, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\0\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\0* ]] 00:03:43.733 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:03:43.733 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.733 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.733 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.733 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.733 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.733 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.733 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == 
\0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\8\8\:\0\0\.\0 ]] 00:03:43.734 12:04:36 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:43.994 12:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:03:43.994 12:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:03:43.994 12:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:03:43.994 12:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:03:43.994 12:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:43.994 12:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:03:43.994 12:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:03:43.994 12:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:43.994 12:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:03:43.994 /dev/nvme0n1p1: 2 bytes were 
erased at offset 0x00000438 (ext4): 53 ef 00:03:43.994 12:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:43.994 12:04:37 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:03:43.994 00:03:43.994 real 0m5.618s 00:03:43.994 user 0m0.907s 00:03:43.994 sys 0m1.575s 00:03:43.994 12:04:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:43.994 12:04:37 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:03:43.994 ************************************ 00:03:43.994 END TEST dm_mount 00:03:43.994 ************************************ 00:03:43.994 12:04:37 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:03:43.994 12:04:37 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:03:43.995 12:04:37 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:43.995 12:04:37 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:43.995 12:04:37 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:03:43.995 12:04:37 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:03:43.995 12:04:37 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:03:44.254 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:03:44.254 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:03:44.254 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:03:44.254 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:03:44.254 12:04:37 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:03:44.254 12:04:37 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:03:44.254 12:04:37 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L 
/dev/mapper/nvme_dm_test ]] 00:03:44.254 12:04:37 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:03:44.254 12:04:37 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:03:44.254 12:04:37 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:03:44.254 12:04:37 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:03:44.254 00:03:44.254 real 0m13.927s 00:03:44.254 user 0m3.090s 00:03:44.254 sys 0m5.006s 00:03:44.254 12:04:37 setup.sh.devices -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:44.254 12:04:37 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:44.254 ************************************ 00:03:44.254 END TEST devices 00:03:44.254 ************************************ 00:03:44.254 00:03:44.254 real 0m42.768s 00:03:44.254 user 0m12.179s 00:03:44.254 sys 0m18.759s 00:03:44.254 12:04:37 setup.sh -- common/autotest_common.sh@1126 -- # xtrace_disable 00:03:44.254 12:04:37 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:44.254 ************************************ 00:03:44.254 END TEST setup.sh 00:03:44.254 ************************************ 00:03:44.254 12:04:37 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:45.633 Hugepages 00:03:45.633 node hugesize free / total 00:03:45.633 node0 1048576kB 0 / 0 00:03:45.633 node0 2048kB 2048 / 2048 00:03:45.633 node1 1048576kB 0 / 0 00:03:45.633 node1 2048kB 0 / 0 00:03:45.633 00:03:45.633 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:45.633 I/OAT 0000:00:04.0 8086 0e20 0 ioatdma - - 00:03:45.633 I/OAT 0000:00:04.1 8086 0e21 0 ioatdma - - 00:03:45.633 I/OAT 0000:00:04.2 8086 0e22 0 ioatdma - - 00:03:45.633 I/OAT 0000:00:04.3 8086 0e23 0 ioatdma - - 00:03:45.633 I/OAT 0000:00:04.4 8086 0e24 0 ioatdma - - 00:03:45.633 I/OAT 0000:00:04.5 8086 0e25 0 ioatdma - - 00:03:45.633 I/OAT 0000:00:04.6 8086 0e26 0 ioatdma - - 00:03:45.633 I/OAT 
0000:00:04.7 8086 0e27 0 ioatdma - - 00:03:45.633 I/OAT 0000:80:04.0 8086 0e20 1 ioatdma - - 00:03:45.633 I/OAT 0000:80:04.1 8086 0e21 1 ioatdma - - 00:03:45.633 I/OAT 0000:80:04.2 8086 0e22 1 ioatdma - - 00:03:45.633 I/OAT 0000:80:04.3 8086 0e23 1 ioatdma - - 00:03:45.633 I/OAT 0000:80:04.4 8086 0e24 1 ioatdma - - 00:03:45.633 I/OAT 0000:80:04.5 8086 0e25 1 ioatdma - - 00:03:45.633 I/OAT 0000:80:04.6 8086 0e26 1 ioatdma - - 00:03:45.633 I/OAT 0000:80:04.7 8086 0e27 1 ioatdma - - 00:03:45.633 NVMe 0000:88:00.0 8086 0a54 1 nvme nvme0 nvme0n1 00:03:45.633 12:04:38 -- spdk/autotest.sh@130 -- # uname -s 00:03:45.633 12:04:38 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:03:45.633 12:04:38 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:03:45.633 12:04:38 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:46.572 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:46.572 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:46.572 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:46.572 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:46.572 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:46.832 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:46.832 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:46.832 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:46.832 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:46.832 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:46.832 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:46.832 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:46.832 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:46.832 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:46.832 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:46.832 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:47.771 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:47.771 12:04:40 -- common/autotest_common.sh@1532 -- # sleep 1 00:03:48.710 12:04:41 -- 
common/autotest_common.sh@1533 -- # bdfs=() 00:03:48.710 12:04:41 -- common/autotest_common.sh@1533 -- # local bdfs 00:03:48.710 12:04:41 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:03:48.710 12:04:41 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:03:48.710 12:04:41 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:48.710 12:04:41 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:48.710 12:04:41 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:48.710 12:04:41 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:48.710 12:04:41 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:48.968 12:04:41 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:48.968 12:04:41 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:48.968 12:04:41 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:49.908 Waiting for block devices as requested 00:03:49.908 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:03:50.175 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:50.175 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:50.435 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:50.435 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:50.435 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:50.435 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:03:50.695 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:50.695 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:50.695 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:03:50.695 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:03:50.953 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:03:50.953 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:03:50.953 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:03:51.213 0000:80:04.2 (8086 0e22): vfio-pci -> 
ioatdma 00:03:51.213 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:03:51.213 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:03:51.473 12:04:44 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:03:51.473 12:04:44 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:88:00.0 00:03:51.473 12:04:44 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:03:51.473 12:04:44 -- common/autotest_common.sh@1502 -- # grep 0000:88:00.0/nvme/nvme 00:03:51.473 12:04:44 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:51.473 12:04:44 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 ]] 00:03:51.473 12:04:44 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:80/0000:80:03.0/0000:88:00.0/nvme/nvme0 00:03:51.473 12:04:44 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:03:51.473 12:04:44 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:03:51.473 12:04:44 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:03:51.473 12:04:44 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:03:51.473 12:04:44 -- common/autotest_common.sh@1545 -- # grep oacs 00:03:51.473 12:04:44 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:03:51.473 12:04:44 -- common/autotest_common.sh@1545 -- # oacs=' 0xf' 00:03:51.473 12:04:44 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:03:51.473 12:04:44 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:03:51.473 12:04:44 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:03:51.473 12:04:44 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:03:51.473 12:04:44 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:03:51.473 12:04:44 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:03:51.473 12:04:44 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:03:51.473 12:04:44 -- 
common/autotest_common.sh@1557 -- # continue 00:03:51.473 12:04:44 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:03:51.473 12:04:44 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:51.473 12:04:44 -- common/autotest_common.sh@10 -- # set +x 00:03:51.473 12:04:44 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:03:51.473 12:04:44 -- common/autotest_common.sh@724 -- # xtrace_disable 00:03:51.473 12:04:44 -- common/autotest_common.sh@10 -- # set +x 00:03:51.473 12:04:44 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:52.449 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:52.449 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:52.449 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:52.449 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:52.449 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:52.709 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:52.709 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:52.709 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:52.709 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:03:52.709 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:03:52.709 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:03:52.709 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:03:52.709 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:03:52.709 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:03:52.709 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:03:52.709 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:03:53.649 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:03:53.649 12:04:46 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:03:53.649 12:04:46 -- common/autotest_common.sh@730 -- # xtrace_disable 00:03:53.649 12:04:46 -- common/autotest_common.sh@10 -- # set +x 00:03:53.649 12:04:46 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:03:53.649 12:04:46 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:03:53.649 12:04:46 -- 
common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:03:53.649 12:04:46 -- common/autotest_common.sh@1577 -- # bdfs=() 00:03:53.649 12:04:46 -- common/autotest_common.sh@1577 -- # local bdfs 00:03:53.649 12:04:46 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:03:53.649 12:04:46 -- common/autotest_common.sh@1513 -- # bdfs=() 00:03:53.649 12:04:46 -- common/autotest_common.sh@1513 -- # local bdfs 00:03:53.649 12:04:46 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.649 12:04:46 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:53.649 12:04:46 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:03:53.907 12:04:46 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:03:53.907 12:04:46 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:03:53.907 12:04:46 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:03:53.907 12:04:46 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:88:00.0/device 00:03:53.907 12:04:46 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:03:53.907 12:04:46 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:03:53.907 12:04:46 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:03:53.907 12:04:46 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:88:00.0 00:03:53.907 12:04:46 -- common/autotest_common.sh@1592 -- # [[ -z 0000:88:00.0 ]] 00:03:53.907 12:04:46 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=2749166 00:03:53.907 12:04:46 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:03:53.907 12:04:46 -- common/autotest_common.sh@1598 -- # waitforlisten 2749166 00:03:53.907 12:04:46 -- common/autotest_common.sh@831 -- # '[' -z 2749166 ']' 00:03:53.907 12:04:46 -- common/autotest_common.sh@835 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:03:53.907 12:04:46 -- common/autotest_common.sh@836 -- # local max_retries=100 00:03:53.907 12:04:46 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:03:53.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:03:53.907 12:04:46 -- common/autotest_common.sh@840 -- # xtrace_disable 00:03:53.907 12:04:46 -- common/autotest_common.sh@10 -- # set +x 00:03:53.907 [2024-07-26 12:04:46.979073] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:03:53.907 [2024-07-26 12:04:46.979181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2749166 ] 00:03:53.907 EAL: No free 2048 kB hugepages reported on node 1 00:03:53.907 [2024-07-26 12:04:47.041237] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:03:53.907 [2024-07-26 12:04:47.157866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:03:54.843 12:04:47 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:03:54.843 12:04:47 -- common/autotest_common.sh@864 -- # return 0 00:03:54.843 12:04:47 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:03:54.843 12:04:47 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:03:54.843 12:04:47 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:88:00.0 00:03:58.137 nvme0n1 00:03:58.137 12:04:51 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:03:58.137 [2024-07-26 12:04:51.236013] nvme_opal.c:2063:spdk_opal_cmd_revert_tper: *ERROR*: Error on starting admin SP session 
with error 18 00:03:58.137 [2024-07-26 12:04:51.236079] vbdev_opal_rpc.c: 134:rpc_bdev_nvme_opal_revert: *ERROR*: Revert TPer failure: 18 00:03:58.137 request: 00:03:58.137 { 00:03:58.137 "nvme_ctrlr_name": "nvme0", 00:03:58.137 "password": "test", 00:03:58.137 "method": "bdev_nvme_opal_revert", 00:03:58.137 "req_id": 1 00:03:58.137 } 00:03:58.137 Got JSON-RPC error response 00:03:58.137 response: 00:03:58.137 { 00:03:58.137 "code": -32603, 00:03:58.137 "message": "Internal error" 00:03:58.137 } 00:03:58.137 12:04:51 -- common/autotest_common.sh@1604 -- # true 00:03:58.137 12:04:51 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:03:58.137 12:04:51 -- common/autotest_common.sh@1608 -- # killprocess 2749166 00:03:58.137 12:04:51 -- common/autotest_common.sh@950 -- # '[' -z 2749166 ']' 00:03:58.137 12:04:51 -- common/autotest_common.sh@954 -- # kill -0 2749166 00:03:58.137 12:04:51 -- common/autotest_common.sh@955 -- # uname 00:03:58.137 12:04:51 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:03:58.137 12:04:51 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2749166 00:03:58.137 12:04:51 -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:03:58.137 12:04:51 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:03:58.137 12:04:51 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2749166' 00:03:58.137 killing process with pid 2749166 00:03:58.137 12:04:51 -- common/autotest_common.sh@969 -- # kill 2749166 00:03:58.137 12:04:51 -- common/autotest_common.sh@974 -- # wait 2749166 00:04:00.043 12:04:53 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:00.043 12:04:53 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:00.043 12:04:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:00.043 12:04:53 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:00.043 12:04:53 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:00.043 12:04:53 -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:00.043 
12:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:00.043 12:04:53 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:00.043 12:04:53 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:00.043 12:04:53 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.043 12:04:53 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.043 12:04:53 -- common/autotest_common.sh@10 -- # set +x 00:04:00.043 ************************************ 00:04:00.043 START TEST env 00:04:00.043 ************************************ 00:04:00.043 12:04:53 env -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:00.043 * Looking for test storage... 00:04:00.043 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:00.043 12:04:53 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:00.043 12:04:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.043 12:04:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.043 12:04:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.043 ************************************ 00:04:00.043 START TEST env_memory 00:04:00.043 ************************************ 00:04:00.043 12:04:53 env.env_memory -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:00.043 00:04:00.043 00:04:00.043 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.043 http://cunit.sourceforge.net/ 00:04:00.043 00:04:00.043 00:04:00.043 Suite: memory 00:04:00.043 Test: alloc and free memory map ...[2024-07-26 12:04:53.253476] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:00.043 passed 00:04:00.043 Test: mem map translation 
...[2024-07-26 12:04:53.273942] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:00.044 [2024-07-26 12:04:53.273965] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:00.044 [2024-07-26 12:04:53.274015] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:00.044 [2024-07-26 12:04:53.274028] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:00.302 passed 00:04:00.302 Test: mem map registration ...[2024-07-26 12:04:53.316547] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:00.303 [2024-07-26 12:04:53.316568] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:00.303 passed 00:04:00.303 Test: mem map adjacent registrations ...passed 00:04:00.303 00:04:00.303 Run Summary: Type Total Ran Passed Failed Inactive 00:04:00.303 suites 1 1 n/a 0 0 00:04:00.303 tests 4 4 4 0 0 00:04:00.303 asserts 152 152 152 0 n/a 00:04:00.303 00:04:00.303 Elapsed time = 0.140 seconds 00:04:00.303 00:04:00.303 real 0m0.147s 00:04:00.303 user 0m0.139s 00:04:00.303 sys 0m0.008s 00:04:00.303 12:04:53 env.env_memory -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:00.303 12:04:53 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:00.303 ************************************ 00:04:00.303 END TEST env_memory 00:04:00.303 
************************************ 00:04:00.303 12:04:53 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:00.303 12:04:53 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:00.303 12:04:53 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:00.303 12:04:53 env -- common/autotest_common.sh@10 -- # set +x 00:04:00.303 ************************************ 00:04:00.303 START TEST env_vtophys 00:04:00.303 ************************************ 00:04:00.303 12:04:53 env.env_vtophys -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:00.303 EAL: lib.eal log level changed from notice to debug 00:04:00.303 EAL: Detected lcore 0 as core 0 on socket 0 00:04:00.303 EAL: Detected lcore 1 as core 1 on socket 0 00:04:00.303 EAL: Detected lcore 2 as core 2 on socket 0 00:04:00.303 EAL: Detected lcore 3 as core 3 on socket 0 00:04:00.303 EAL: Detected lcore 4 as core 4 on socket 0 00:04:00.303 EAL: Detected lcore 5 as core 5 on socket 0 00:04:00.303 EAL: Detected lcore 6 as core 8 on socket 0 00:04:00.303 EAL: Detected lcore 7 as core 9 on socket 0 00:04:00.303 EAL: Detected lcore 8 as core 10 on socket 0 00:04:00.303 EAL: Detected lcore 9 as core 11 on socket 0 00:04:00.303 EAL: Detected lcore 10 as core 12 on socket 0 00:04:00.303 EAL: Detected lcore 11 as core 13 on socket 0 00:04:00.303 EAL: Detected lcore 12 as core 0 on socket 1 00:04:00.303 EAL: Detected lcore 13 as core 1 on socket 1 00:04:00.303 EAL: Detected lcore 14 as core 2 on socket 1 00:04:00.303 EAL: Detected lcore 15 as core 3 on socket 1 00:04:00.303 EAL: Detected lcore 16 as core 4 on socket 1 00:04:00.303 EAL: Detected lcore 17 as core 5 on socket 1 00:04:00.303 EAL: Detected lcore 18 as core 8 on socket 1 00:04:00.303 EAL: Detected lcore 19 as core 9 on socket 1 00:04:00.303 EAL: Detected lcore 20 as core 10 on socket 1 00:04:00.303 EAL: 
Detected lcore 21 as core 11 on socket 1 00:04:00.303 EAL: Detected lcore 22 as core 12 on socket 1 00:04:00.303 EAL: Detected lcore 23 as core 13 on socket 1 00:04:00.303 EAL: Detected lcore 24 as core 0 on socket 0 00:04:00.303 EAL: Detected lcore 25 as core 1 on socket 0 00:04:00.303 EAL: Detected lcore 26 as core 2 on socket 0 00:04:00.303 EAL: Detected lcore 27 as core 3 on socket 0 00:04:00.303 EAL: Detected lcore 28 as core 4 on socket 0 00:04:00.303 EAL: Detected lcore 29 as core 5 on socket 0 00:04:00.303 EAL: Detected lcore 30 as core 8 on socket 0 00:04:00.303 EAL: Detected lcore 31 as core 9 on socket 0 00:04:00.303 EAL: Detected lcore 32 as core 10 on socket 0 00:04:00.303 EAL: Detected lcore 33 as core 11 on socket 0 00:04:00.303 EAL: Detected lcore 34 as core 12 on socket 0 00:04:00.303 EAL: Detected lcore 35 as core 13 on socket 0 00:04:00.303 EAL: Detected lcore 36 as core 0 on socket 1 00:04:00.303 EAL: Detected lcore 37 as core 1 on socket 1 00:04:00.303 EAL: Detected lcore 38 as core 2 on socket 1 00:04:00.303 EAL: Detected lcore 39 as core 3 on socket 1 00:04:00.303 EAL: Detected lcore 40 as core 4 on socket 1 00:04:00.303 EAL: Detected lcore 41 as core 5 on socket 1 00:04:00.303 EAL: Detected lcore 42 as core 8 on socket 1 00:04:00.303 EAL: Detected lcore 43 as core 9 on socket 1 00:04:00.303 EAL: Detected lcore 44 as core 10 on socket 1 00:04:00.303 EAL: Detected lcore 45 as core 11 on socket 1 00:04:00.303 EAL: Detected lcore 46 as core 12 on socket 1 00:04:00.303 EAL: Detected lcore 47 as core 13 on socket 1 00:04:00.303 EAL: Maximum logical cores by configuration: 128 00:04:00.303 EAL: Detected CPU lcores: 48 00:04:00.303 EAL: Detected NUMA nodes: 2 00:04:00.303 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:00.303 EAL: Detected shared linkage of DPDK 00:04:00.303 EAL: No shared files mode enabled, IPC will be disabled 00:04:00.303 EAL: Bus pci wants IOVA as 'DC' 00:04:00.303 EAL: Buses did not request a specific IOVA mode. 
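The lcore listing above maps each logical CPU to a (core, socket) pair across the two NUMA nodes. A minimal sketch of how such "Detected lcore N as core C on socket S" lines can be produced from (lcore, core, socket) triples — the triples here are a hard-coded sample, not read from this host; on a real Linux system they would come from `/sys/devices/system/cpu/cpu*/topology/{core_id,physical_package_id}`:

```shell
#!/bin/sh
# Hypothetical helper (not SPDK/DPDK code): format topology triples the
# way the EAL log above reports them. Input format: "lcore core socket".
print_topology() {
    while read -r lcore core socket; do
        printf 'EAL: Detected lcore %s as core %s on socket %s\n' \
            "$lcore" "$core" "$socket"
    done
}

# Sample triples matching the first entries of the log above.
printf '0 0 0\n1 1 0\n12 0 1\n' | print_topology
```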
00:04:00.303 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:00.303 EAL: Selected IOVA mode 'VA' 00:04:00.303 EAL: No free 2048 kB hugepages reported on node 1 00:04:00.303 EAL: Probing VFIO support... 00:04:00.303 EAL: IOMMU type 1 (Type 1) is supported 00:04:00.303 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:00.303 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:00.303 EAL: VFIO support initialized 00:04:00.303 EAL: Ask a virtual area of 0x2e000 bytes 00:04:00.303 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:00.303 EAL: Setting up physically contiguous memory... 00:04:00.303 EAL: Setting maximum number of open files to 524288 00:04:00.303 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:00.303 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:00.303 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:00.303 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.303 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:00.303 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.303 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.303 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:00.303 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:00.303 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.303 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:00.303 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.303 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.303 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:00.303 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:00.303 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.303 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:00.303 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.303 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.303 
EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:00.303 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:00.303 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.303 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:00.303 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:00.303 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.303 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:00.303 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:00.303 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:00.303 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.303 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:00.303 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.303 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.303 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:00.303 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:00.303 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.303 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:00.303 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.303 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.303 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:00.303 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:00.303 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.303 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:00.303 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.303 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.303 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:00.303 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:00.303 EAL: Ask a virtual area of 0x61000 bytes 00:04:00.303 EAL: Virtual area found at 0x201c00e00000 (size = 
0x61000) 00:04:00.303 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:00.303 EAL: Ask a virtual area of 0x400000000 bytes 00:04:00.303 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:00.303 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:00.303 EAL: Hugepages will be freed exactly as allocated. 00:04:00.303 EAL: No shared files mode enabled, IPC is disabled 00:04:00.303 EAL: No shared files mode enabled, IPC is disabled 00:04:00.303 EAL: TSC frequency is ~2700000 KHz 00:04:00.303 EAL: Main lcore 0 is ready (tid=7fcd88d63a00;cpuset=[0]) 00:04:00.303 EAL: Trying to obtain current memory policy. 00:04:00.303 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.303 EAL: Restoring previous memory policy: 0 00:04:00.303 EAL: request: mp_malloc_sync 00:04:00.303 EAL: No shared files mode enabled, IPC is disabled 00:04:00.303 EAL: Heap on socket 0 was expanded by 2MB 00:04:00.304 EAL: No shared files mode enabled, IPC is disabled 00:04:00.304 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:00.304 EAL: Mem event callback 'spdk:(nil)' registered 00:04:00.304 00:04:00.304 00:04:00.304 CUnit - A unit testing framework for C - Version 2.1-3 00:04:00.304 http://cunit.sourceforge.net/ 00:04:00.304 00:04:00.304 00:04:00.304 Suite: components_suite 00:04:00.304 Test: vtophys_malloc_test ...passed 00:04:00.304 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
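The memseg geometry logged above is internally consistent: each of the 4 segment lists per socket is created with n_segs:8192 and hugepage_sz:2097152 (2 MB pages), and each reserves a 0x400000000-byte virtual area, which is exactly 8192 × 2 MB. A quick arithmetic check of that relationship (an observation about the log, not SPDK code):

```shell
#!/bin/sh
# Verify that n_segs * hugepage_sz matches the 0x400000000-byte
# VA reservations reported per memseg list in the EAL log above.
n_segs=8192
hugepage_sz=$((2 * 1024 * 1024))   # 2 MB hugepages
printf '0x%x\n' $((n_segs * hugepage_sz))
```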
00:04:00.304 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.304 EAL: Restoring previous memory policy: 4 00:04:00.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.304 EAL: request: mp_malloc_sync 00:04:00.304 EAL: No shared files mode enabled, IPC is disabled 00:04:00.304 EAL: Heap on socket 0 was expanded by 4MB 00:04:00.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.304 EAL: request: mp_malloc_sync 00:04:00.304 EAL: No shared files mode enabled, IPC is disabled 00:04:00.304 EAL: Heap on socket 0 was shrunk by 4MB 00:04:00.304 EAL: Trying to obtain current memory policy. 00:04:00.304 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.304 EAL: Restoring previous memory policy: 4 00:04:00.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.304 EAL: request: mp_malloc_sync 00:04:00.304 EAL: No shared files mode enabled, IPC is disabled 00:04:00.304 EAL: Heap on socket 0 was expanded by 6MB 00:04:00.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.304 EAL: request: mp_malloc_sync 00:04:00.304 EAL: No shared files mode enabled, IPC is disabled 00:04:00.304 EAL: Heap on socket 0 was shrunk by 6MB 00:04:00.304 EAL: Trying to obtain current memory policy. 00:04:00.304 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.304 EAL: Restoring previous memory policy: 4 00:04:00.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.304 EAL: request: mp_malloc_sync 00:04:00.304 EAL: No shared files mode enabled, IPC is disabled 00:04:00.304 EAL: Heap on socket 0 was expanded by 10MB 00:04:00.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.304 EAL: request: mp_malloc_sync 00:04:00.304 EAL: No shared files mode enabled, IPC is disabled 00:04:00.304 EAL: Heap on socket 0 was shrunk by 10MB 00:04:00.304 EAL: Trying to obtain current memory policy. 
00:04:00.304 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.304 EAL: Restoring previous memory policy: 4 00:04:00.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.304 EAL: request: mp_malloc_sync 00:04:00.304 EAL: No shared files mode enabled, IPC is disabled 00:04:00.304 EAL: Heap on socket 0 was expanded by 18MB 00:04:00.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.304 EAL: request: mp_malloc_sync 00:04:00.304 EAL: No shared files mode enabled, IPC is disabled 00:04:00.304 EAL: Heap on socket 0 was shrunk by 18MB 00:04:00.304 EAL: Trying to obtain current memory policy. 00:04:00.304 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.304 EAL: Restoring previous memory policy: 4 00:04:00.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.304 EAL: request: mp_malloc_sync 00:04:00.304 EAL: No shared files mode enabled, IPC is disabled 00:04:00.304 EAL: Heap on socket 0 was expanded by 34MB 00:04:00.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.304 EAL: request: mp_malloc_sync 00:04:00.304 EAL: No shared files mode enabled, IPC is disabled 00:04:00.304 EAL: Heap on socket 0 was shrunk by 34MB 00:04:00.304 EAL: Trying to obtain current memory policy. 00:04:00.304 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.304 EAL: Restoring previous memory policy: 4 00:04:00.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.304 EAL: request: mp_malloc_sync 00:04:00.304 EAL: No shared files mode enabled, IPC is disabled 00:04:00.304 EAL: Heap on socket 0 was expanded by 66MB 00:04:00.304 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.304 EAL: request: mp_malloc_sync 00:04:00.304 EAL: No shared files mode enabled, IPC is disabled 00:04:00.304 EAL: Heap on socket 0 was shrunk by 66MB 00:04:00.304 EAL: Trying to obtain current memory policy. 
00:04:00.304 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.563 EAL: Restoring previous memory policy: 4 00:04:00.563 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.563 EAL: request: mp_malloc_sync 00:04:00.563 EAL: No shared files mode enabled, IPC is disabled 00:04:00.563 EAL: Heap on socket 0 was expanded by 130MB 00:04:00.563 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.563 EAL: request: mp_malloc_sync 00:04:00.563 EAL: No shared files mode enabled, IPC is disabled 00:04:00.563 EAL: Heap on socket 0 was shrunk by 130MB 00:04:00.563 EAL: Trying to obtain current memory policy. 00:04:00.563 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.563 EAL: Restoring previous memory policy: 4 00:04:00.563 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.563 EAL: request: mp_malloc_sync 00:04:00.563 EAL: No shared files mode enabled, IPC is disabled 00:04:00.563 EAL: Heap on socket 0 was expanded by 258MB 00:04:00.563 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.821 EAL: request: mp_malloc_sync 00:04:00.821 EAL: No shared files mode enabled, IPC is disabled 00:04:00.821 EAL: Heap on socket 0 was shrunk by 258MB 00:04:00.821 EAL: Trying to obtain current memory policy. 00:04:00.821 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:00.821 EAL: Restoring previous memory policy: 4 00:04:00.821 EAL: Calling mem event callback 'spdk:(nil)' 00:04:00.821 EAL: request: mp_malloc_sync 00:04:00.821 EAL: No shared files mode enabled, IPC is disabled 00:04:00.821 EAL: Heap on socket 0 was expanded by 514MB 00:04:01.080 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.080 EAL: request: mp_malloc_sync 00:04:01.080 EAL: No shared files mode enabled, IPC is disabled 00:04:01.080 EAL: Heap on socket 0 was shrunk by 514MB 00:04:01.080 EAL: Trying to obtain current memory policy. 
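The "Heap on socket 0 was expanded by N MB" sizes in the malloc test above (4, 6, 10, 18, 34, 66, 130, 258 MB, continuing below) follow a simple pattern: each step is a power-of-two test allocation of 2^k MB stacked on the initial 2 MB heap. This sketch just reproduces that sequence as an observation about the log, not as SPDK code:

```shell
#!/bin/sh
# Reproduce the expand sizes: 2^k MB allocation + 2 MB initial heap,
# for k from 1 to 10 (i.e. 2..1024 MB allocations).
sizes=""
k=2
while [ "$k" -le 1024 ]; do
    sizes="$sizes $((k + 2))"
    k=$((k * 2))
done
echo "MB steps:$sizes"
```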
00:04:01.080 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:01.339 EAL: Restoring previous memory policy: 4 00:04:01.339 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.339 EAL: request: mp_malloc_sync 00:04:01.339 EAL: No shared files mode enabled, IPC is disabled 00:04:01.339 EAL: Heap on socket 0 was expanded by 1026MB 00:04:01.598 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.857 EAL: request: mp_malloc_sync 00:04:01.857 EAL: No shared files mode enabled, IPC is disabled 00:04:01.857 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:01.857 passed 00:04:01.857 00:04:01.857 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.857 suites 1 1 n/a 0 0 00:04:01.857 tests 2 2 2 0 0 00:04:01.857 asserts 497 497 497 0 n/a 00:04:01.857 00:04:01.857 Elapsed time = 1.384 seconds 00:04:01.857 EAL: Calling mem event callback 'spdk:(nil)' 00:04:01.857 EAL: request: mp_malloc_sync 00:04:01.857 EAL: No shared files mode enabled, IPC is disabled 00:04:01.857 EAL: Heap on socket 0 was shrunk by 2MB 00:04:01.857 EAL: No shared files mode enabled, IPC is disabled 00:04:01.857 EAL: No shared files mode enabled, IPC is disabled 00:04:01.857 EAL: No shared files mode enabled, IPC is disabled 00:04:01.857 00:04:01.857 real 0m1.494s 00:04:01.857 user 0m0.869s 00:04:01.857 sys 0m0.593s 00:04:01.857 12:04:54 env.env_vtophys -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.857 12:04:54 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:01.857 ************************************ 00:04:01.857 END TEST env_vtophys 00:04:01.857 ************************************ 00:04:01.857 12:04:54 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:01.857 12:04:54 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:01.857 12:04:54 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.857 12:04:54 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.857 
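The env_vtophys run above drives the EAL malloc heap through paired "expanded by N MB" / "shrunk by N MB" cycles (18MB, 34MB, 66MB, ... up to 1026MB), firing the 'spdk:(nil)' mem event callback on each transition. A minimal sketch of checking that every expansion in such a log has a matching shrink — the helper names and the assumption that the messages keep this exact "Heap on socket N was expanded/shrunk by XMB" form are mine, not part of the test suite:

```python
import re

def heap_events(log_text):
    """Extract (direction, megabytes) pairs from EAL heap messages."""
    pat = re.compile(r"Heap on socket \d+ was (expanded|shrunk) by (\d+)MB")
    return [(d, int(mb)) for d, mb in pat.findall(log_text)]

def balanced(events):
    """True when every expansion is matched by a shrink of the same size."""
    expanded = sorted(mb for d, mb in events if d == "expanded")
    shrunk = sorted(mb for d, mb in events if d == "shrunk")
    return expanded == shrunk

sample = (
    "EAL: Heap on socket 0 was expanded by 18MB "
    "EAL: Heap on socket 0 was shrunk by 18MB "
    "EAL: Heap on socket 0 was expanded by 34MB "
    "EAL: Heap on socket 0 was shrunk by 34MB "
)
print(balanced(heap_events(sample)))  # True for the sample above
```

A balanced result is what the passing run above shows: the final "shrunk by 2MB" aside, each heap grow/shrink pair in the vtophys test is symmetric.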
************************************ 00:04:01.857 START TEST env_pci 00:04:01.857 ************************************ 00:04:01.857 12:04:54 env.env_pci -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:01.857 00:04:01.857 00:04:01.857 CUnit - A unit testing framework for C - Version 2.1-3 00:04:01.857 http://cunit.sourceforge.net/ 00:04:01.857 00:04:01.857 00:04:01.857 Suite: pci 00:04:01.857 Test: pci_hook ...[2024-07-26 12:04:54.968733] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 2750182 has claimed it 00:04:01.857 EAL: Cannot find device (10000:00:01.0) 00:04:01.857 EAL: Failed to attach device on primary process 00:04:01.857 passed 00:04:01.857 00:04:01.857 Run Summary: Type Total Ran Passed Failed Inactive 00:04:01.857 suites 1 1 n/a 0 0 00:04:01.857 tests 1 1 1 0 0 00:04:01.857 asserts 25 25 25 0 n/a 00:04:01.857 00:04:01.857 Elapsed time = 0.021 seconds 00:04:01.857 00:04:01.857 real 0m0.034s 00:04:01.857 user 0m0.010s 00:04:01.857 sys 0m0.024s 00:04:01.857 12:04:54 env.env_pci -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:01.857 12:04:54 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:01.857 ************************************ 00:04:01.857 END TEST env_pci 00:04:01.857 ************************************ 00:04:01.857 12:04:55 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:01.857 12:04:55 env -- env/env.sh@15 -- # uname 00:04:01.857 12:04:55 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:01.857 12:04:55 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:01.857 12:04:55 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.857 12:04:55 env -- 
common/autotest_common.sh@1101 -- # '[' 5 -le 1 ']' 00:04:01.857 12:04:55 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:01.857 12:04:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:01.857 ************************************ 00:04:01.857 START TEST env_dpdk_post_init 00:04:01.857 ************************************ 00:04:01.857 12:04:55 env.env_dpdk_post_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:01.857 EAL: Detected CPU lcores: 48 00:04:01.857 EAL: Detected NUMA nodes: 2 00:04:01.857 EAL: Detected shared linkage of DPDK 00:04:01.857 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:01.857 EAL: Selected IOVA mode 'VA' 00:04:01.857 EAL: No free 2048 kB hugepages reported on node 1 00:04:01.857 EAL: VFIO support initialized 00:04:01.858 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:02.117 EAL: Using IOMMU type 1 (Type 1) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:00:04.0 (socket 0) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:00:04.1 (socket 0) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 0000:00:04.2 (socket 0) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:00:04.3 (socket 0) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:00:04.4 (socket 0) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:00:04.5 (socket 0) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:00:04.6 (socket 0) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:00:04.7 (socket 0) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e20) device: 0000:80:04.0 (socket 1) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e21) device: 0000:80:04.1 (socket 1) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e22) device: 
0000:80:04.2 (socket 1) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e23) device: 0000:80:04.3 (socket 1) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e24) device: 0000:80:04.4 (socket 1) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e25) device: 0000:80:04.5 (socket 1) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e26) device: 0000:80:04.6 (socket 1) 00:04:02.117 EAL: Probe PCI driver: spdk_ioat (8086:0e27) device: 0000:80:04.7 (socket 1) 00:04:03.057 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:88:00.0 (socket 1) 00:04:06.347 EAL: Releasing PCI mapped resource for 0000:88:00.0 00:04:06.347 EAL: Calling pci_unmap_resource for 0000:88:00.0 at 0x202001040000 00:04:06.347 Starting DPDK initialization... 00:04:06.347 Starting SPDK post initialization... 00:04:06.347 SPDK NVMe probe 00:04:06.347 Attaching to 0000:88:00.0 00:04:06.347 Attached to 0000:88:00.0 00:04:06.347 Cleaning up... 00:04:06.347 00:04:06.347 real 0m4.374s 00:04:06.347 user 0m3.260s 00:04:06.347 sys 0m0.173s 00:04:06.347 12:04:59 env.env_dpdk_post_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.347 12:04:59 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:06.347 ************************************ 00:04:06.347 END TEST env_dpdk_post_init 00:04:06.347 ************************************ 00:04:06.347 12:04:59 env -- env/env.sh@26 -- # uname 00:04:06.347 12:04:59 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:06.347 12:04:59 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.347 12:04:59 env -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.347 12:04:59 env -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.347 12:04:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.347 ************************************ 00:04:06.347 START TEST env_mem_callbacks 00:04:06.347 
************************************ 00:04:06.348 12:04:59 env.env_mem_callbacks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:06.348 EAL: Detected CPU lcores: 48 00:04:06.348 EAL: Detected NUMA nodes: 2 00:04:06.348 EAL: Detected shared linkage of DPDK 00:04:06.348 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:06.348 EAL: Selected IOVA mode 'VA' 00:04:06.348 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.348 EAL: VFIO support initialized 00:04:06.348 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:06.348 00:04:06.348 00:04:06.348 CUnit - A unit testing framework for C - Version 2.1-3 00:04:06.348 http://cunit.sourceforge.net/ 00:04:06.348 00:04:06.348 00:04:06.348 Suite: memory 00:04:06.348 Test: test ... 00:04:06.348 register 0x200000200000 2097152 00:04:06.348 malloc 3145728 00:04:06.348 register 0x200000400000 4194304 00:04:06.348 buf 0x200000500000 len 3145728 PASSED 00:04:06.348 malloc 64 00:04:06.348 buf 0x2000004fff40 len 64 PASSED 00:04:06.348 malloc 4194304 00:04:06.348 register 0x200000800000 6291456 00:04:06.348 buf 0x200000a00000 len 4194304 PASSED 00:04:06.348 free 0x200000500000 3145728 00:04:06.348 free 0x2000004fff40 64 00:04:06.348 unregister 0x200000400000 4194304 PASSED 00:04:06.348 free 0x200000a00000 4194304 00:04:06.348 unregister 0x200000800000 6291456 PASSED 00:04:06.348 malloc 8388608 00:04:06.348 register 0x200000400000 10485760 00:04:06.348 buf 0x200000600000 len 8388608 PASSED 00:04:06.348 free 0x200000600000 8388608 00:04:06.348 unregister 0x200000400000 10485760 PASSED 00:04:06.348 passed 00:04:06.348 00:04:06.348 Run Summary: Type Total Ran Passed Failed Inactive 00:04:06.348 suites 1 1 n/a 0 0 00:04:06.348 tests 1 1 1 0 0 00:04:06.348 asserts 15 15 15 0 n/a 00:04:06.348 00:04:06.348 Elapsed time = 0.005 seconds 00:04:06.348 00:04:06.348 real 0m0.049s 00:04:06.348 user 0m0.012s 00:04:06.348 sys 0m0.037s 
00:04:06.348 12:04:59 env.env_mem_callbacks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.348 12:04:59 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:06.348 ************************************ 00:04:06.348 END TEST env_mem_callbacks 00:04:06.348 ************************************ 00:04:06.348 00:04:06.348 real 0m6.383s 00:04:06.348 user 0m4.399s 00:04:06.348 sys 0m1.030s 00:04:06.348 12:04:59 env -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:06.348 12:04:59 env -- common/autotest_common.sh@10 -- # set +x 00:04:06.348 ************************************ 00:04:06.348 END TEST env 00:04:06.348 ************************************ 00:04:06.348 12:04:59 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:06.348 12:04:59 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.348 12:04:59 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.348 12:04:59 -- common/autotest_common.sh@10 -- # set +x 00:04:06.348 ************************************ 00:04:06.348 START TEST rpc 00:04:06.348 ************************************ 00:04:06.348 12:04:59 rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:06.607 * Looking for test storage... 
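The env_mem_callbacks output above interleaves `register`/`unregister` lines (address plus length) with the malloc/free traffic that triggers them. A hedged sketch of pairing those lines up to find regions that were registered but never unregistered — the function name and log format assumption are illustrative, not part of the SPDK test:

```python
import re

def unmatched_registrations(log_text):
    """Map addr -> length for regions registered but never unregistered."""
    live = {}
    # "unregister" first so it is not partially matched as "register".
    pat = re.compile(r"(unregister|register) (0x[0-9a-f]+) (\d+)")
    for op, addr, length in pat.findall(log_text):
        if op == "register":
            live[addr] = int(length)
        else:
            live.pop(addr, None)
    return live

sample = (
    "register 0x200000200000 2097152 "
    "register 0x200000400000 4194304 "
    "unregister 0x200000400000 4194304 "
)
print(unmatched_registrations(sample))  # only 0x200000200000 remains live
```

In the passing run above, every `unregister` line is tagged PASSED and matches an earlier `register`, which is exactly the balance this kind of check verifies.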
00:04:06.607 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.607 12:04:59 rpc -- rpc/rpc.sh@65 -- # spdk_pid=2750843 00:04:06.607 12:04:59 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:06.607 12:04:59 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:06.607 12:04:59 rpc -- rpc/rpc.sh@67 -- # waitforlisten 2750843 00:04:06.607 12:04:59 rpc -- common/autotest_common.sh@831 -- # '[' -z 2750843 ']' 00:04:06.607 12:04:59 rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:06.607 12:04:59 rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:06.607 12:04:59 rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:06.607 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:06.607 12:04:59 rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:06.607 12:04:59 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:06.607 [2024-07-26 12:04:59.672181] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:04:06.607 [2024-07-26 12:04:59.672272] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2750843 ] 00:04:06.608 EAL: No free 2048 kB hugepages reported on node 1 00:04:06.608 [2024-07-26 12:04:59.729809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:06.608 [2024-07-26 12:04:59.842731] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 
00:04:06.608 [2024-07-26 12:04:59.842795] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 2750843' to capture a snapshot of events at runtime. 00:04:06.608 [2024-07-26 12:04:59.842808] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:06.608 [2024-07-26 12:04:59.842820] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:06.608 [2024-07-26 12:04:59.842829] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid2750843 for offline analysis/debug. 00:04:06.608 [2024-07-26 12:04:59.842858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:06.866 12:05:00 rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:06.866 12:05:00 rpc -- common/autotest_common.sh@864 -- # return 0 00:04:06.866 12:05:00 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.866 12:05:00 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:06.867 12:05:00 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:06.867 12:05:00 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:06.867 12:05:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:06.867 12:05:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:06.867 12:05:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.125 
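The rpc test above launches `spdk_tgt` and then blocks in `waitforlisten` (with `max_retries=100`) until the target is listening on the UNIX domain socket /var/tmp/spdk.sock. The real `waitforlisten` is a shell helper in autotest_common.sh; purely as an illustration of the polling pattern, a minimal Python equivalent might look like this (helper name and timings are my own assumptions):

```python
import os
import socket
import tempfile
import threading
import time

def wait_for_unix_socket(path, max_retries=100, delay=0.05):
    """Poll until a UNIX domain socket at `path` accepts a connection."""
    for _ in range(max_retries):
        if os.path.exists(path):
            try:
                with socket.socket(socket.AF_UNIX, socket.SOCK_STREAM) as s:
                    s.connect(path)
                return True
            except OSError:
                pass  # path exists but nothing is listening yet
        time.sleep(delay)
    return False

# Demo: bring a listener up shortly after the waiter starts polling.
sock_path = os.path.join(tempfile.mkdtemp(), "demo.sock")

def serve():
    time.sleep(0.1)
    srv = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    srv.bind(sock_path)
    srv.listen(1)
    srv.accept()  # accept the waiter's probe, then exit
    srv.close()

threading.Thread(target=serve, daemon=True).start()
print(wait_for_unix_socket(sock_path))  # True once the listener is up
```

Polling with a bounded retry count, as here, is what lets the harness fail fast (and run its `killprocess` trap) if spdk_tgt never comes up, instead of hanging the build.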
************************************ 00:04:07.125 START TEST rpc_integrity 00:04:07.125 ************************************ 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.125 { 00:04:07.125 "name": "Malloc0", 00:04:07.125 "aliases": [ 00:04:07.125 "368ccdc8-843d-4a8d-b05e-f45addf2de7e" 00:04:07.125 ], 00:04:07.125 "product_name": "Malloc disk", 00:04:07.125 "block_size": 512, 00:04:07.125 "num_blocks": 16384, 00:04:07.125 "uuid": "368ccdc8-843d-4a8d-b05e-f45addf2de7e", 00:04:07.125 
"assigned_rate_limits": { 00:04:07.125 "rw_ios_per_sec": 0, 00:04:07.125 "rw_mbytes_per_sec": 0, 00:04:07.125 "r_mbytes_per_sec": 0, 00:04:07.125 "w_mbytes_per_sec": 0 00:04:07.125 }, 00:04:07.125 "claimed": false, 00:04:07.125 "zoned": false, 00:04:07.125 "supported_io_types": { 00:04:07.125 "read": true, 00:04:07.125 "write": true, 00:04:07.125 "unmap": true, 00:04:07.125 "flush": true, 00:04:07.125 "reset": true, 00:04:07.125 "nvme_admin": false, 00:04:07.125 "nvme_io": false, 00:04:07.125 "nvme_io_md": false, 00:04:07.125 "write_zeroes": true, 00:04:07.125 "zcopy": true, 00:04:07.125 "get_zone_info": false, 00:04:07.125 "zone_management": false, 00:04:07.125 "zone_append": false, 00:04:07.125 "compare": false, 00:04:07.125 "compare_and_write": false, 00:04:07.125 "abort": true, 00:04:07.125 "seek_hole": false, 00:04:07.125 "seek_data": false, 00:04:07.125 "copy": true, 00:04:07.125 "nvme_iov_md": false 00:04:07.125 }, 00:04:07.125 "memory_domains": [ 00:04:07.125 { 00:04:07.125 "dma_device_id": "system", 00:04:07.125 "dma_device_type": 1 00:04:07.125 }, 00:04:07.125 { 00:04:07.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.125 "dma_device_type": 2 00:04:07.125 } 00:04:07.125 ], 00:04:07.125 "driver_specific": {} 00:04:07.125 } 00:04:07.125 ]' 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.125 [2024-07-26 12:05:00.245457] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:07.125 [2024-07-26 12:05:00.245507] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.125 [2024-07-26 12:05:00.245532] 
vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ed5d50 00:04:07.125 [2024-07-26 12:05:00.245547] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.125 [2024-07-26 12:05:00.247252] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.125 [2024-07-26 12:05:00.247278] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.125 Passthru0 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.125 { 00:04:07.125 "name": "Malloc0", 00:04:07.125 "aliases": [ 00:04:07.125 "368ccdc8-843d-4a8d-b05e-f45addf2de7e" 00:04:07.125 ], 00:04:07.125 "product_name": "Malloc disk", 00:04:07.125 "block_size": 512, 00:04:07.125 "num_blocks": 16384, 00:04:07.125 "uuid": "368ccdc8-843d-4a8d-b05e-f45addf2de7e", 00:04:07.125 "assigned_rate_limits": { 00:04:07.125 "rw_ios_per_sec": 0, 00:04:07.125 "rw_mbytes_per_sec": 0, 00:04:07.125 "r_mbytes_per_sec": 0, 00:04:07.125 "w_mbytes_per_sec": 0 00:04:07.125 }, 00:04:07.125 "claimed": true, 00:04:07.125 "claim_type": "exclusive_write", 00:04:07.125 "zoned": false, 00:04:07.125 "supported_io_types": { 00:04:07.125 "read": true, 00:04:07.125 "write": true, 00:04:07.125 "unmap": true, 00:04:07.125 "flush": true, 00:04:07.125 "reset": true, 00:04:07.125 "nvme_admin": false, 00:04:07.125 "nvme_io": false, 00:04:07.125 "nvme_io_md": false, 00:04:07.125 "write_zeroes": true, 00:04:07.125 "zcopy": true, 00:04:07.125 "get_zone_info": false, 00:04:07.125 
"zone_management": false, 00:04:07.125 "zone_append": false, 00:04:07.125 "compare": false, 00:04:07.125 "compare_and_write": false, 00:04:07.125 "abort": true, 00:04:07.125 "seek_hole": false, 00:04:07.125 "seek_data": false, 00:04:07.125 "copy": true, 00:04:07.125 "nvme_iov_md": false 00:04:07.125 }, 00:04:07.125 "memory_domains": [ 00:04:07.125 { 00:04:07.125 "dma_device_id": "system", 00:04:07.125 "dma_device_type": 1 00:04:07.125 }, 00:04:07.125 { 00:04:07.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.125 "dma_device_type": 2 00:04:07.125 } 00:04:07.125 ], 00:04:07.125 "driver_specific": {} 00:04:07.125 }, 00:04:07.125 { 00:04:07.125 "name": "Passthru0", 00:04:07.125 "aliases": [ 00:04:07.125 "e24dc545-ecf0-57a5-bc17-fbc8698a49d6" 00:04:07.125 ], 00:04:07.125 "product_name": "passthru", 00:04:07.125 "block_size": 512, 00:04:07.125 "num_blocks": 16384, 00:04:07.125 "uuid": "e24dc545-ecf0-57a5-bc17-fbc8698a49d6", 00:04:07.125 "assigned_rate_limits": { 00:04:07.125 "rw_ios_per_sec": 0, 00:04:07.125 "rw_mbytes_per_sec": 0, 00:04:07.125 "r_mbytes_per_sec": 0, 00:04:07.125 "w_mbytes_per_sec": 0 00:04:07.125 }, 00:04:07.125 "claimed": false, 00:04:07.125 "zoned": false, 00:04:07.125 "supported_io_types": { 00:04:07.125 "read": true, 00:04:07.125 "write": true, 00:04:07.125 "unmap": true, 00:04:07.125 "flush": true, 00:04:07.125 "reset": true, 00:04:07.125 "nvme_admin": false, 00:04:07.125 "nvme_io": false, 00:04:07.125 "nvme_io_md": false, 00:04:07.125 "write_zeroes": true, 00:04:07.125 "zcopy": true, 00:04:07.125 "get_zone_info": false, 00:04:07.125 "zone_management": false, 00:04:07.125 "zone_append": false, 00:04:07.125 "compare": false, 00:04:07.125 "compare_and_write": false, 00:04:07.125 "abort": true, 00:04:07.125 "seek_hole": false, 00:04:07.125 "seek_data": false, 00:04:07.125 "copy": true, 00:04:07.125 "nvme_iov_md": false 00:04:07.125 }, 00:04:07.125 "memory_domains": [ 00:04:07.125 { 00:04:07.125 "dma_device_id": "system", 00:04:07.125 
"dma_device_type": 1 00:04:07.125 }, 00:04:07.125 { 00:04:07.125 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.125 "dma_device_type": 2 00:04:07.125 } 00:04:07.125 ], 00:04:07.125 "driver_specific": { 00:04:07.125 "passthru": { 00:04:07.125 "name": "Passthru0", 00:04:07.125 "base_bdev_name": "Malloc0" 00:04:07.125 } 00:04:07.125 } 00:04:07.125 } 00:04:07.125 ]' 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.125 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:07.125 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.126 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.126 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.126 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.126 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.126 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.126 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.126 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.126 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.126 12:05:00 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.126 00:04:07.126 real 0m0.234s 00:04:07.126 user 0m0.154s 00:04:07.126 sys 0m0.021s 00:04:07.126 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:04:07.126 12:05:00 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.126 ************************************ 00:04:07.126 END TEST rpc_integrity 00:04:07.126 ************************************ 00:04:07.384 12:05:00 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:07.384 12:05:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.384 12:05:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.384 12:05:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.384 ************************************ 00:04:07.384 START TEST rpc_plugins 00:04:07.384 ************************************ 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@1125 -- # rpc_plugins 00:04:07.384 12:05:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.384 12:05:00 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:07.384 12:05:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.384 12:05:00 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:07.384 { 00:04:07.384 "name": "Malloc1", 00:04:07.384 "aliases": [ 00:04:07.384 "432fc010-ebc0-450e-99d2-82acbb2588f0" 00:04:07.384 ], 00:04:07.384 "product_name": "Malloc disk", 00:04:07.384 "block_size": 4096, 00:04:07.384 "num_blocks": 256, 00:04:07.384 "uuid": "432fc010-ebc0-450e-99d2-82acbb2588f0", 00:04:07.384 "assigned_rate_limits": { 00:04:07.384 
"rw_ios_per_sec": 0, 00:04:07.384 "rw_mbytes_per_sec": 0, 00:04:07.384 "r_mbytes_per_sec": 0, 00:04:07.384 "w_mbytes_per_sec": 0 00:04:07.384 }, 00:04:07.384 "claimed": false, 00:04:07.384 "zoned": false, 00:04:07.384 "supported_io_types": { 00:04:07.384 "read": true, 00:04:07.384 "write": true, 00:04:07.384 "unmap": true, 00:04:07.384 "flush": true, 00:04:07.384 "reset": true, 00:04:07.384 "nvme_admin": false, 00:04:07.384 "nvme_io": false, 00:04:07.384 "nvme_io_md": false, 00:04:07.384 "write_zeroes": true, 00:04:07.384 "zcopy": true, 00:04:07.384 "get_zone_info": false, 00:04:07.384 "zone_management": false, 00:04:07.384 "zone_append": false, 00:04:07.384 "compare": false, 00:04:07.384 "compare_and_write": false, 00:04:07.384 "abort": true, 00:04:07.384 "seek_hole": false, 00:04:07.384 "seek_data": false, 00:04:07.384 "copy": true, 00:04:07.384 "nvme_iov_md": false 00:04:07.384 }, 00:04:07.384 "memory_domains": [ 00:04:07.384 { 00:04:07.384 "dma_device_id": "system", 00:04:07.384 "dma_device_type": 1 00:04:07.384 }, 00:04:07.384 { 00:04:07.384 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.384 "dma_device_type": 2 00:04:07.384 } 00:04:07.384 ], 00:04:07.384 "driver_specific": {} 00:04:07.384 } 00:04:07.384 ]' 00:04:07.384 12:05:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:07.384 12:05:00 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:07.384 12:05:00 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.384 12:05:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- 
# set +x 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.384 12:05:00 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:07.384 12:05:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:07.384 12:05:00 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:07.384 00:04:07.384 real 0m0.115s 00:04:07.384 user 0m0.074s 00:04:07.384 sys 0m0.011s 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.384 12:05:00 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:07.384 ************************************ 00:04:07.384 END TEST rpc_plugins 00:04:07.384 ************************************ 00:04:07.384 12:05:00 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:07.384 12:05:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.384 12:05:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.384 12:05:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.384 ************************************ 00:04:07.384 START TEST rpc_trace_cmd_test 00:04:07.384 ************************************ 00:04:07.384 12:05:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1125 -- # rpc_trace_cmd_test 00:04:07.384 12:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:07.384 12:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:07.384 12:05:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.384 12:05:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.384 12:05:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.384 12:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:07.384 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid2750843", 00:04:07.384 "tpoint_group_mask": "0x8", 00:04:07.384 "iscsi_conn": { 00:04:07.384 "mask": "0x2", 00:04:07.384 
"tpoint_mask": "0x0" 00:04:07.384 }, 00:04:07.384 "scsi": { 00:04:07.384 "mask": "0x4", 00:04:07.384 "tpoint_mask": "0x0" 00:04:07.384 }, 00:04:07.384 "bdev": { 00:04:07.384 "mask": "0x8", 00:04:07.384 "tpoint_mask": "0xffffffffffffffff" 00:04:07.384 }, 00:04:07.384 "nvmf_rdma": { 00:04:07.384 "mask": "0x10", 00:04:07.384 "tpoint_mask": "0x0" 00:04:07.384 }, 00:04:07.384 "nvmf_tcp": { 00:04:07.384 "mask": "0x20", 00:04:07.384 "tpoint_mask": "0x0" 00:04:07.384 }, 00:04:07.384 "ftl": { 00:04:07.384 "mask": "0x40", 00:04:07.384 "tpoint_mask": "0x0" 00:04:07.384 }, 00:04:07.384 "blobfs": { 00:04:07.384 "mask": "0x80", 00:04:07.384 "tpoint_mask": "0x0" 00:04:07.384 }, 00:04:07.384 "dsa": { 00:04:07.384 "mask": "0x200", 00:04:07.384 "tpoint_mask": "0x0" 00:04:07.384 }, 00:04:07.384 "thread": { 00:04:07.384 "mask": "0x400", 00:04:07.384 "tpoint_mask": "0x0" 00:04:07.384 }, 00:04:07.384 "nvme_pcie": { 00:04:07.384 "mask": "0x800", 00:04:07.384 "tpoint_mask": "0x0" 00:04:07.384 }, 00:04:07.384 "iaa": { 00:04:07.384 "mask": "0x1000", 00:04:07.384 "tpoint_mask": "0x0" 00:04:07.384 }, 00:04:07.384 "nvme_tcp": { 00:04:07.384 "mask": "0x2000", 00:04:07.384 "tpoint_mask": "0x0" 00:04:07.384 }, 00:04:07.384 "bdev_nvme": { 00:04:07.384 "mask": "0x4000", 00:04:07.384 "tpoint_mask": "0x0" 00:04:07.384 }, 00:04:07.384 "sock": { 00:04:07.384 "mask": "0x8000", 00:04:07.384 "tpoint_mask": "0x0" 00:04:07.384 } 00:04:07.384 }' 00:04:07.384 12:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:07.384 12:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:07.384 12:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:07.644 12:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:07.644 12:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:07.644 12:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:07.644 12:05:00 rpc.rpc_trace_cmd_test 
-- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:07.644 12:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:07.644 12:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:07.644 12:05:00 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:07.644 00:04:07.644 real 0m0.199s 00:04:07.644 user 0m0.179s 00:04:07.644 sys 0m0.012s 00:04:07.644 12:05:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.644 12:05:00 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:07.644 ************************************ 00:04:07.644 END TEST rpc_trace_cmd_test 00:04:07.644 ************************************ 00:04:07.644 12:05:00 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:07.644 12:05:00 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:07.644 12:05:00 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:07.644 12:05:00 rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:07.644 12:05:00 rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:07.644 12:05:00 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:07.644 ************************************ 00:04:07.645 START TEST rpc_daemon_integrity 00:04:07.645 ************************************ 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1125 -- # rpc_integrity 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:07.645 12:05:00 
rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:07.645 { 00:04:07.645 "name": "Malloc2", 00:04:07.645 "aliases": [ 00:04:07.645 "fe0c7ac6-dd63-49d6-832e-4c0e26f89210" 00:04:07.645 ], 00:04:07.645 "product_name": "Malloc disk", 00:04:07.645 "block_size": 512, 00:04:07.645 "num_blocks": 16384, 00:04:07.645 "uuid": "fe0c7ac6-dd63-49d6-832e-4c0e26f89210", 00:04:07.645 "assigned_rate_limits": { 00:04:07.645 "rw_ios_per_sec": 0, 00:04:07.645 "rw_mbytes_per_sec": 0, 00:04:07.645 "r_mbytes_per_sec": 0, 00:04:07.645 "w_mbytes_per_sec": 0 00:04:07.645 }, 00:04:07.645 "claimed": false, 00:04:07.645 "zoned": false, 00:04:07.645 "supported_io_types": { 00:04:07.645 "read": true, 00:04:07.645 "write": true, 00:04:07.645 "unmap": true, 00:04:07.645 "flush": true, 00:04:07.645 "reset": true, 00:04:07.645 "nvme_admin": false, 00:04:07.645 "nvme_io": false, 00:04:07.645 "nvme_io_md": false, 00:04:07.645 "write_zeroes": true, 00:04:07.645 "zcopy": true, 00:04:07.645 "get_zone_info": false, 00:04:07.645 "zone_management": false, 00:04:07.645 
"zone_append": false, 00:04:07.645 "compare": false, 00:04:07.645 "compare_and_write": false, 00:04:07.645 "abort": true, 00:04:07.645 "seek_hole": false, 00:04:07.645 "seek_data": false, 00:04:07.645 "copy": true, 00:04:07.645 "nvme_iov_md": false 00:04:07.645 }, 00:04:07.645 "memory_domains": [ 00:04:07.645 { 00:04:07.645 "dma_device_id": "system", 00:04:07.645 "dma_device_type": 1 00:04:07.645 }, 00:04:07.645 { 00:04:07.645 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.645 "dma_device_type": 2 00:04:07.645 } 00:04:07.645 ], 00:04:07.645 "driver_specific": {} 00:04:07.645 } 00:04:07.645 ]' 00:04:07.645 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:07.905 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:07.905 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:07.905 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.905 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.905 [2024-07-26 12:05:00.931533] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:07.905 [2024-07-26 12:05:00.931583] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:07.905 [2024-07-26 12:05:00.931616] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x1ed6c00 00:04:07.905 [2024-07-26 12:05:00.931632] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:07.905 [2024-07-26 12:05:00.932983] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:07.905 [2024-07-26 12:05:00.933013] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:07.905 Passthru0 00:04:07.905 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.905 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # 
rpc_cmd bdev_get_bdevs 00:04:07.905 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.905 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.905 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.905 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:07.905 { 00:04:07.905 "name": "Malloc2", 00:04:07.905 "aliases": [ 00:04:07.905 "fe0c7ac6-dd63-49d6-832e-4c0e26f89210" 00:04:07.905 ], 00:04:07.905 "product_name": "Malloc disk", 00:04:07.905 "block_size": 512, 00:04:07.905 "num_blocks": 16384, 00:04:07.905 "uuid": "fe0c7ac6-dd63-49d6-832e-4c0e26f89210", 00:04:07.905 "assigned_rate_limits": { 00:04:07.905 "rw_ios_per_sec": 0, 00:04:07.905 "rw_mbytes_per_sec": 0, 00:04:07.905 "r_mbytes_per_sec": 0, 00:04:07.905 "w_mbytes_per_sec": 0 00:04:07.905 }, 00:04:07.905 "claimed": true, 00:04:07.905 "claim_type": "exclusive_write", 00:04:07.905 "zoned": false, 00:04:07.905 "supported_io_types": { 00:04:07.905 "read": true, 00:04:07.905 "write": true, 00:04:07.905 "unmap": true, 00:04:07.905 "flush": true, 00:04:07.905 "reset": true, 00:04:07.905 "nvme_admin": false, 00:04:07.905 "nvme_io": false, 00:04:07.905 "nvme_io_md": false, 00:04:07.905 "write_zeroes": true, 00:04:07.905 "zcopy": true, 00:04:07.905 "get_zone_info": false, 00:04:07.905 "zone_management": false, 00:04:07.905 "zone_append": false, 00:04:07.905 "compare": false, 00:04:07.905 "compare_and_write": false, 00:04:07.905 "abort": true, 00:04:07.905 "seek_hole": false, 00:04:07.905 "seek_data": false, 00:04:07.905 "copy": true, 00:04:07.905 "nvme_iov_md": false 00:04:07.905 }, 00:04:07.905 "memory_domains": [ 00:04:07.905 { 00:04:07.905 "dma_device_id": "system", 00:04:07.905 "dma_device_type": 1 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.905 "dma_device_type": 2 00:04:07.905 } 00:04:07.905 ], 00:04:07.905 
"driver_specific": {} 00:04:07.905 }, 00:04:07.905 { 00:04:07.905 "name": "Passthru0", 00:04:07.905 "aliases": [ 00:04:07.905 "35a88be2-a6c0-5f37-9117-5d39cc0b171c" 00:04:07.905 ], 00:04:07.905 "product_name": "passthru", 00:04:07.906 "block_size": 512, 00:04:07.906 "num_blocks": 16384, 00:04:07.906 "uuid": "35a88be2-a6c0-5f37-9117-5d39cc0b171c", 00:04:07.906 "assigned_rate_limits": { 00:04:07.906 "rw_ios_per_sec": 0, 00:04:07.906 "rw_mbytes_per_sec": 0, 00:04:07.906 "r_mbytes_per_sec": 0, 00:04:07.906 "w_mbytes_per_sec": 0 00:04:07.906 }, 00:04:07.906 "claimed": false, 00:04:07.906 "zoned": false, 00:04:07.906 "supported_io_types": { 00:04:07.906 "read": true, 00:04:07.906 "write": true, 00:04:07.906 "unmap": true, 00:04:07.906 "flush": true, 00:04:07.906 "reset": true, 00:04:07.906 "nvme_admin": false, 00:04:07.906 "nvme_io": false, 00:04:07.906 "nvme_io_md": false, 00:04:07.906 "write_zeroes": true, 00:04:07.906 "zcopy": true, 00:04:07.906 "get_zone_info": false, 00:04:07.906 "zone_management": false, 00:04:07.906 "zone_append": false, 00:04:07.906 "compare": false, 00:04:07.906 "compare_and_write": false, 00:04:07.906 "abort": true, 00:04:07.906 "seek_hole": false, 00:04:07.906 "seek_data": false, 00:04:07.906 "copy": true, 00:04:07.906 "nvme_iov_md": false 00:04:07.906 }, 00:04:07.906 "memory_domains": [ 00:04:07.906 { 00:04:07.906 "dma_device_id": "system", 00:04:07.906 "dma_device_type": 1 00:04:07.906 }, 00:04:07.906 { 00:04:07.906 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:07.906 "dma_device_type": 2 00:04:07.906 } 00:04:07.906 ], 00:04:07.906 "driver_specific": { 00:04:07.906 "passthru": { 00:04:07.906 "name": "Passthru0", 00:04:07.906 "base_bdev_name": "Malloc2" 00:04:07.906 } 00:04:07.906 } 00:04:07.906 } 00:04:07.906 ]' 00:04:07.906 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:07.906 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:07.906 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # 
rpc_cmd bdev_passthru_delete Passthru0 00:04:07.906 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.906 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.906 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.906 12:05:00 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:07.906 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.906 12:05:00 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.906 12:05:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.906 12:05:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:07.906 12:05:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:07.906 12:05:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.906 12:05:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:07.906 12:05:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:07.906 12:05:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:07.906 12:05:01 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:07.906 00:04:07.906 real 0m0.231s 00:04:07.906 user 0m0.150s 00:04:07.906 sys 0m0.022s 00:04:07.906 12:05:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:07.906 12:05:01 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:07.906 ************************************ 00:04:07.906 END TEST rpc_daemon_integrity 00:04:07.906 ************************************ 00:04:07.906 12:05:01 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:07.906 12:05:01 rpc -- rpc/rpc.sh@84 -- # killprocess 2750843 00:04:07.906 12:05:01 rpc -- common/autotest_common.sh@950 -- # '[' -z 2750843 ']' 
00:04:07.906 12:05:01 rpc -- common/autotest_common.sh@954 -- # kill -0 2750843 00:04:07.906 12:05:01 rpc -- common/autotest_common.sh@955 -- # uname 00:04:07.906 12:05:01 rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:07.906 12:05:01 rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2750843 00:04:07.906 12:05:01 rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:07.906 12:05:01 rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:07.906 12:05:01 rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2750843' 00:04:07.906 killing process with pid 2750843 00:04:07.906 12:05:01 rpc -- common/autotest_common.sh@969 -- # kill 2750843 00:04:07.906 12:05:01 rpc -- common/autotest_common.sh@974 -- # wait 2750843 00:04:08.472 00:04:08.472 real 0m1.990s 00:04:08.472 user 0m2.491s 00:04:08.472 sys 0m0.589s 00:04:08.472 12:05:01 rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:08.472 12:05:01 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.472 ************************************ 00:04:08.472 END TEST rpc 00:04:08.472 ************************************ 00:04:08.472 12:05:01 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:08.472 12:05:01 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.472 12:05:01 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.472 12:05:01 -- common/autotest_common.sh@10 -- # set +x 00:04:08.472 ************************************ 00:04:08.472 START TEST skip_rpc 00:04:08.472 ************************************ 00:04:08.472 12:05:01 skip_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:08.472 * Looking for test storage... 
00:04:08.472 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:08.472 12:05:01 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:08.472 12:05:01 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:08.472 12:05:01 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:08.472 12:05:01 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:08.472 12:05:01 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:08.472 12:05:01 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:08.472 ************************************ 00:04:08.472 START TEST skip_rpc 00:04:08.472 ************************************ 00:04:08.472 12:05:01 skip_rpc.skip_rpc -- common/autotest_common.sh@1125 -- # test_skip_rpc 00:04:08.472 12:05:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=2751276 00:04:08.472 12:05:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:08.472 12:05:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:08.472 12:05:01 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:08.732 [2024-07-26 12:05:01.738612] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:04:08.732 [2024-07-26 12:05:01.738690] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2751276 ] 00:04:08.732 EAL: No free 2048 kB hugepages reported on node 1 00:04:08.732 [2024-07-26 12:05:01.799595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:08.732 [2024-07-26 12:05:01.917581] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # local es=0 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # rpc_cmd spdk_get_version 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@653 -- # es=1 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@677 -- # (( !es 
== 0 )) 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 2751276 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@950 -- # '[' -z 2751276 ']' 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # kill -0 2751276 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # uname 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2751276 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2751276' 00:04:14.034 killing process with pid 2751276 00:04:14.034 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@969 -- # kill 2751276 00:04:14.035 12:05:06 skip_rpc.skip_rpc -- common/autotest_common.sh@974 -- # wait 2751276 00:04:14.035 00:04:14.035 real 0m5.494s 00:04:14.035 user 0m5.167s 00:04:14.035 sys 0m0.330s 00:04:14.035 12:05:07 skip_rpc.skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:14.035 12:05:07 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.035 ************************************ 00:04:14.035 END TEST skip_rpc 00:04:14.035 ************************************ 00:04:14.035 12:05:07 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:14.035 12:05:07 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:14.035 12:05:07 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:14.035 12:05:07 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:14.035 
************************************ 00:04:14.035 START TEST skip_rpc_with_json 00:04:14.035 ************************************ 00:04:14.035 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_json 00:04:14.035 12:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:14.035 12:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=2751964 00:04:14.035 12:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:14.035 12:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:14.035 12:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 2751964 00:04:14.035 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@831 -- # '[' -z 2751964 ']' 00:04:14.035 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:14.035 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:14.035 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:14.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:14.035 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:14.035 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.035 [2024-07-26 12:05:07.278544] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:04:14.035 [2024-07-26 12:05:07.278638] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2751964 ] 00:04:14.294 EAL: No free 2048 kB hugepages reported on node 1 00:04:14.294 [2024-07-26 12:05:07.336875] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:14.294 [2024-07-26 12:05:07.445566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:14.553 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:14.553 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # return 0 00:04:14.553 12:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:14.553 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.553 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.553 [2024-07-26 12:05:07.705327] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:14.553 request: 00:04:14.553 { 00:04:14.553 "trtype": "tcp", 00:04:14.553 "method": "nvmf_get_transports", 00:04:14.553 "req_id": 1 00:04:14.553 } 00:04:14.553 Got JSON-RPC error response 00:04:14.553 response: 00:04:14.553 { 00:04:14.553 "code": -19, 00:04:14.553 "message": "No such device" 00:04:14.553 } 00:04:14.553 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:04:14.553 12:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:14.553 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.553 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.553 [2024-07-26 12:05:07.713472] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:14.553 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.553 12:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:14.553 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:14.553 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:14.813 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:14.813 12:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:14.813 { 00:04:14.813 "subsystems": [ 00:04:14.813 { 00:04:14.813 "subsystem": "vfio_user_target", 00:04:14.813 "config": null 00:04:14.813 }, 00:04:14.813 { 00:04:14.813 "subsystem": "keyring", 00:04:14.813 "config": [] 00:04:14.813 }, 00:04:14.813 { 00:04:14.813 "subsystem": "iobuf", 00:04:14.813 "config": [ 00:04:14.813 { 00:04:14.813 "method": "iobuf_set_options", 00:04:14.813 "params": { 00:04:14.814 "small_pool_count": 8192, 00:04:14.814 "large_pool_count": 1024, 00:04:14.814 "small_bufsize": 8192, 00:04:14.814 "large_bufsize": 135168 00:04:14.814 } 00:04:14.814 } 00:04:14.814 ] 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "subsystem": "sock", 00:04:14.814 "config": [ 00:04:14.814 { 00:04:14.814 "method": "sock_set_default_impl", 00:04:14.814 "params": { 00:04:14.814 "impl_name": "posix" 00:04:14.814 } 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "method": "sock_impl_set_options", 00:04:14.814 "params": { 00:04:14.814 "impl_name": "ssl", 00:04:14.814 "recv_buf_size": 4096, 00:04:14.814 "send_buf_size": 4096, 00:04:14.814 "enable_recv_pipe": true, 00:04:14.814 "enable_quickack": false, 00:04:14.814 "enable_placement_id": 0, 00:04:14.814 "enable_zerocopy_send_server": true, 00:04:14.814 "enable_zerocopy_send_client": false, 00:04:14.814 "zerocopy_threshold": 0, 
00:04:14.814 "tls_version": 0, 00:04:14.814 "enable_ktls": false 00:04:14.814 } 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "method": "sock_impl_set_options", 00:04:14.814 "params": { 00:04:14.814 "impl_name": "posix", 00:04:14.814 "recv_buf_size": 2097152, 00:04:14.814 "send_buf_size": 2097152, 00:04:14.814 "enable_recv_pipe": true, 00:04:14.814 "enable_quickack": false, 00:04:14.814 "enable_placement_id": 0, 00:04:14.814 "enable_zerocopy_send_server": true, 00:04:14.814 "enable_zerocopy_send_client": false, 00:04:14.814 "zerocopy_threshold": 0, 00:04:14.814 "tls_version": 0, 00:04:14.814 "enable_ktls": false 00:04:14.814 } 00:04:14.814 } 00:04:14.814 ] 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "subsystem": "vmd", 00:04:14.814 "config": [] 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "subsystem": "accel", 00:04:14.814 "config": [ 00:04:14.814 { 00:04:14.814 "method": "accel_set_options", 00:04:14.814 "params": { 00:04:14.814 "small_cache_size": 128, 00:04:14.814 "large_cache_size": 16, 00:04:14.814 "task_count": 2048, 00:04:14.814 "sequence_count": 2048, 00:04:14.814 "buf_count": 2048 00:04:14.814 } 00:04:14.814 } 00:04:14.814 ] 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "subsystem": "bdev", 00:04:14.814 "config": [ 00:04:14.814 { 00:04:14.814 "method": "bdev_set_options", 00:04:14.814 "params": { 00:04:14.814 "bdev_io_pool_size": 65535, 00:04:14.814 "bdev_io_cache_size": 256, 00:04:14.814 "bdev_auto_examine": true, 00:04:14.814 "iobuf_small_cache_size": 128, 00:04:14.814 "iobuf_large_cache_size": 16 00:04:14.814 } 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "method": "bdev_raid_set_options", 00:04:14.814 "params": { 00:04:14.814 "process_window_size_kb": 1024, 00:04:14.814 "process_max_bandwidth_mb_sec": 0 00:04:14.814 } 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "method": "bdev_iscsi_set_options", 00:04:14.814 "params": { 00:04:14.814 "timeout_sec": 30 00:04:14.814 } 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "method": "bdev_nvme_set_options", 00:04:14.814 
"params": { 00:04:14.814 "action_on_timeout": "none", 00:04:14.814 "timeout_us": 0, 00:04:14.814 "timeout_admin_us": 0, 00:04:14.814 "keep_alive_timeout_ms": 10000, 00:04:14.814 "arbitration_burst": 0, 00:04:14.814 "low_priority_weight": 0, 00:04:14.814 "medium_priority_weight": 0, 00:04:14.814 "high_priority_weight": 0, 00:04:14.814 "nvme_adminq_poll_period_us": 10000, 00:04:14.814 "nvme_ioq_poll_period_us": 0, 00:04:14.814 "io_queue_requests": 0, 00:04:14.814 "delay_cmd_submit": true, 00:04:14.814 "transport_retry_count": 4, 00:04:14.814 "bdev_retry_count": 3, 00:04:14.814 "transport_ack_timeout": 0, 00:04:14.814 "ctrlr_loss_timeout_sec": 0, 00:04:14.814 "reconnect_delay_sec": 0, 00:04:14.814 "fast_io_fail_timeout_sec": 0, 00:04:14.814 "disable_auto_failback": false, 00:04:14.814 "generate_uuids": false, 00:04:14.814 "transport_tos": 0, 00:04:14.814 "nvme_error_stat": false, 00:04:14.814 "rdma_srq_size": 0, 00:04:14.814 "io_path_stat": false, 00:04:14.814 "allow_accel_sequence": false, 00:04:14.814 "rdma_max_cq_size": 0, 00:04:14.814 "rdma_cm_event_timeout_ms": 0, 00:04:14.814 "dhchap_digests": [ 00:04:14.814 "sha256", 00:04:14.814 "sha384", 00:04:14.814 "sha512" 00:04:14.814 ], 00:04:14.814 "dhchap_dhgroups": [ 00:04:14.814 "null", 00:04:14.814 "ffdhe2048", 00:04:14.814 "ffdhe3072", 00:04:14.814 "ffdhe4096", 00:04:14.814 "ffdhe6144", 00:04:14.814 "ffdhe8192" 00:04:14.814 ] 00:04:14.814 } 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "method": "bdev_nvme_set_hotplug", 00:04:14.814 "params": { 00:04:14.814 "period_us": 100000, 00:04:14.814 "enable": false 00:04:14.814 } 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "method": "bdev_wait_for_examine" 00:04:14.814 } 00:04:14.814 ] 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "subsystem": "scsi", 00:04:14.814 "config": null 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "subsystem": "scheduler", 00:04:14.814 "config": [ 00:04:14.814 { 00:04:14.814 "method": "framework_set_scheduler", 00:04:14.814 "params": { 00:04:14.814 
"name": "static" 00:04:14.814 } 00:04:14.814 } 00:04:14.814 ] 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "subsystem": "vhost_scsi", 00:04:14.814 "config": [] 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "subsystem": "vhost_blk", 00:04:14.814 "config": [] 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "subsystem": "ublk", 00:04:14.814 "config": [] 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "subsystem": "nbd", 00:04:14.814 "config": [] 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "subsystem": "nvmf", 00:04:14.814 "config": [ 00:04:14.814 { 00:04:14.814 "method": "nvmf_set_config", 00:04:14.814 "params": { 00:04:14.814 "discovery_filter": "match_any", 00:04:14.814 "admin_cmd_passthru": { 00:04:14.814 "identify_ctrlr": false 00:04:14.814 } 00:04:14.814 } 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "method": "nvmf_set_max_subsystems", 00:04:14.814 "params": { 00:04:14.814 "max_subsystems": 1024 00:04:14.814 } 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "method": "nvmf_set_crdt", 00:04:14.814 "params": { 00:04:14.814 "crdt1": 0, 00:04:14.814 "crdt2": 0, 00:04:14.814 "crdt3": 0 00:04:14.814 } 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "method": "nvmf_create_transport", 00:04:14.814 "params": { 00:04:14.814 "trtype": "TCP", 00:04:14.814 "max_queue_depth": 128, 00:04:14.814 "max_io_qpairs_per_ctrlr": 127, 00:04:14.814 "in_capsule_data_size": 4096, 00:04:14.814 "max_io_size": 131072, 00:04:14.814 "io_unit_size": 131072, 00:04:14.814 "max_aq_depth": 128, 00:04:14.814 "num_shared_buffers": 511, 00:04:14.814 "buf_cache_size": 4294967295, 00:04:14.814 "dif_insert_or_strip": false, 00:04:14.814 "zcopy": false, 00:04:14.814 "c2h_success": true, 00:04:14.814 "sock_priority": 0, 00:04:14.814 "abort_timeout_sec": 1, 00:04:14.814 "ack_timeout": 0, 00:04:14.814 "data_wr_pool_size": 0 00:04:14.814 } 00:04:14.814 } 00:04:14.814 ] 00:04:14.814 }, 00:04:14.814 { 00:04:14.814 "subsystem": "iscsi", 00:04:14.814 "config": [ 00:04:14.814 { 00:04:14.814 "method": "iscsi_set_options", 00:04:14.814 
"params": { 00:04:14.814 "node_base": "iqn.2016-06.io.spdk", 00:04:14.814 "max_sessions": 128, 00:04:14.814 "max_connections_per_session": 2, 00:04:14.814 "max_queue_depth": 64, 00:04:14.814 "default_time2wait": 2, 00:04:14.814 "default_time2retain": 20, 00:04:14.814 "first_burst_length": 8192, 00:04:14.814 "immediate_data": true, 00:04:14.814 "allow_duplicated_isid": false, 00:04:14.814 "error_recovery_level": 0, 00:04:14.814 "nop_timeout": 60, 00:04:14.814 "nop_in_interval": 30, 00:04:14.814 "disable_chap": false, 00:04:14.814 "require_chap": false, 00:04:14.814 "mutual_chap": false, 00:04:14.814 "chap_group": 0, 00:04:14.814 "max_large_datain_per_connection": 64, 00:04:14.815 "max_r2t_per_connection": 4, 00:04:14.815 "pdu_pool_size": 36864, 00:04:14.815 "immediate_data_pool_size": 16384, 00:04:14.815 "data_out_pool_size": 2048 00:04:14.815 } 00:04:14.815 } 00:04:14.815 ] 00:04:14.815 } 00:04:14.815 ] 00:04:14.815 } 00:04:14.815 12:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:14.815 12:05:07 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 2751964 00:04:14.815 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2751964 ']' 00:04:14.815 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2751964 00:04:14.815 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:14.815 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:14.815 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2751964 00:04:14.815 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:14.815 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:14.815 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing 
process with pid 2751964' 00:04:14.815 killing process with pid 2751964 00:04:14.815 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2751964 00:04:14.815 12:05:07 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2751964 00:04:15.382 12:05:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=2752114 00:04:15.382 12:05:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:15.382 12:05:08 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 2752114 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@950 -- # '[' -z 2752114 ']' 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # kill -0 2752114 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # uname 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2752114 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2752114' 00:04:20.650 killing process with pid 2752114 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@969 -- # kill 2752114 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@974 -- # wait 2752114 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- 
rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:20.650 00:04:20.650 real 0m6.639s 00:04:20.650 user 0m6.243s 00:04:20.650 sys 0m0.684s 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.650 12:05:13 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:20.650 ************************************ 00:04:20.650 END TEST skip_rpc_with_json 00:04:20.650 ************************************ 00:04:20.650 12:05:13 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:20.650 12:05:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.650 12:05:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.650 12:05:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.909 ************************************ 00:04:20.909 START TEST skip_rpc_with_delay 00:04:20.909 ************************************ 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1125 -- # test_skip_rpc_with_delay 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # local es=0 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.909 
12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:20.909 [2024-07-26 12:05:13.960481] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
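The trace above runs spdk_tgt through a negation wrapper (`NOT … --wait-for-rpc`) and passes the test precisely because the app refuses to start. A minimal sketch of that exit-status-inversion pattern is below; `not_cmd` is an illustrative name, not the actual `NOT()` helper from autotest_common.sh, which also inspects the error code range.

```shell
# Sketch of a negative-test wrapper: run a command and succeed only if it
# fails, the way the trace treats spdk_tgt rejecting '--wait-for-rpc'.
not_cmd() {
    if "$@"; then
        return 1    # command unexpectedly succeeded -> negative test fails
    fi
    return 0        # command failed, which is the expected outcome
}

# Example: a command that exits non-zero counts as a pass here.
not_cmd false && echo "negative test passed"
```

In the log, the wrapped command is the real spdk_tgt invocation, and the `es=1` bookkeeping afterwards records the (expected) failure status.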
00:04:20.909 [2024-07-26 12:05:13.960594] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@653 -- # es=1 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:20.909 00:04:20.909 real 0m0.066s 00:04:20.909 user 0m0.040s 00:04:20.909 sys 0m0.025s 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:20.909 12:05:13 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:20.909 ************************************ 00:04:20.909 END TEST skip_rpc_with_delay 00:04:20.909 ************************************ 00:04:20.909 12:05:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:20.909 12:05:13 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:20.909 12:05:13 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:20.909 12:05:13 skip_rpc -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:20.909 12:05:13 skip_rpc -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:20.909 12:05:13 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:20.909 ************************************ 00:04:20.910 START TEST exit_on_failed_rpc_init 00:04:20.910 ************************************ 00:04:20.910 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1125 -- # test_exit_on_failed_rpc_init 00:04:20.910 12:05:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=2752825 00:04:20.910 12:05:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:20.910 12:05:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 2752825 00:04:20.910 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@831 -- # '[' -z 2752825 ']' 00:04:20.910 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:20.910 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:20.910 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:20.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:20.910 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:20.910 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:20.910 [2024-07-26 12:05:14.067502] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:04:20.910 [2024-07-26 12:05:14.067605] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2752825 ] 00:04:20.910 EAL: No free 2048 kB hugepages reported on node 1 00:04:20.910 [2024-07-26 12:05:14.127269] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.169 [2024-07-26 12:05:14.238170] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:21.428 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:21.428 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # return 0 00:04:21.428 12:05:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:21.428 12:05:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.428 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # local es=0 00:04:21.428 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.428 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.428 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.428 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.428 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.428 12:05:14 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.429 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:04:21.429 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:21.429 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:21.429 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:21.429 [2024-07-26 12:05:14.548731] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:04:21.429 [2024-07-26 12:05:14.548827] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2752841 ] 00:04:21.429 EAL: No free 2048 kB hugepages reported on node 1 00:04:21.429 [2024-07-26 12:05:14.609491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:21.688 [2024-07-26 12:05:14.729391] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:21.688 [2024-07-26 12:05:14.729510] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
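Each test section above ends by calling `killprocess` on the spdk_tgt pid. A hedged sketch of that shape follows: probe the pid with `kill -0`, terminate it, and reap it with `wait`. Names are illustrative; the real autotest_common.sh helper additionally special-cases sudo-owned reactors, as the `reactor_0 = sudo` check in the trace shows.

```shell
# Sketch of a killprocess-style cleanup helper.
killprocess_sketch() {
    local pid=$1
    # kill -0 sends no signal; it only checks that the pid exists
    if ! kill -0 "$pid" 2>/dev/null; then
        echo "process $pid not running"
        return 1
    fi
    kill "$pid"
    # reap the child so no zombie is left behind (ignore the SIGTERM status)
    wait "$pid" 2>/dev/null || true
    echo "killed process $pid"
}

sleep 30 &                 # stand-in for a running spdk_tgt
killprocess_sketch $!
```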
00:04:21.688 [2024-07-26 12:05:14.729532] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:21.688 [2024-07-26 12:05:14.729545] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:21.688 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@653 -- # es=234 00:04:21.688 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:04:21.688 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@662 -- # es=106 00:04:21.688 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # case "$es" in 00:04:21.688 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@670 -- # es=1 00:04:21.688 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:04:21.689 12:05:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:21.689 12:05:14 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 2752825 00:04:21.689 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@950 -- # '[' -z 2752825 ']' 00:04:21.689 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # kill -0 2752825 00:04:21.689 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # uname 00:04:21.689 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:21.689 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2752825 00:04:21.689 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:21.689 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:21.689 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2752825' 
00:04:21.689 killing process with pid 2752825 00:04:21.689 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@969 -- # kill 2752825 00:04:21.689 12:05:14 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@974 -- # wait 2752825 00:04:22.254 00:04:22.254 real 0m1.326s 00:04:22.254 user 0m1.493s 00:04:22.254 sys 0m0.452s 00:04:22.254 12:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.254 12:05:15 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:22.254 ************************************ 00:04:22.254 END TEST exit_on_failed_rpc_init 00:04:22.254 ************************************ 00:04:22.254 12:05:15 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:22.254 00:04:22.254 real 0m13.753s 00:04:22.254 user 0m13.032s 00:04:22.254 sys 0m1.646s 00:04:22.254 12:05:15 skip_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.254 12:05:15 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.254 ************************************ 00:04:22.254 END TEST skip_rpc 00:04:22.254 ************************************ 00:04:22.254 12:05:15 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:22.254 12:05:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.254 12:05:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.254 12:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:22.254 ************************************ 00:04:22.254 START TEST rpc_client 00:04:22.254 ************************************ 00:04:22.254 12:05:15 rpc_client -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:22.254 * Looking for test storage... 
00:04:22.254 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:22.254 12:05:15 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:22.254 OK 00:04:22.254 12:05:15 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:22.254 00:04:22.254 real 0m0.066s 00:04:22.254 user 0m0.027s 00:04:22.254 sys 0m0.043s 00:04:22.254 12:05:15 rpc_client -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:22.254 12:05:15 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:22.254 ************************************ 00:04:22.254 END TEST rpc_client 00:04:22.254 ************************************ 00:04:22.254 12:05:15 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:22.254 12:05:15 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:22.254 12:05:15 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:22.254 12:05:15 -- common/autotest_common.sh@10 -- # set +x 00:04:22.512 ************************************ 00:04:22.512 START TEST json_config 00:04:22.512 ************************************ 00:04:22.512 12:05:15 json_config -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:22.512 12:05:15 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:22.512 12:05:15 json_config -- 
nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:22.512 12:05:15 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:22.512 12:05:15 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:22.512 12:05:15 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:22.512 12:05:15 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:22.512 12:05:15 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:04:22.512 12:05:15 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.513 12:05:15 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.513 12:05:15 json_config -- paths/export.sh@5 -- # export PATH 00:04:22.513 12:05:15 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:22.513 12:05:15 json_config -- nvmf/common.sh@47 -- # : 0 00:04:22.513 12:05:15 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:22.513 12:05:15 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:22.513 12:05:15 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:22.513 12:05:15 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:22.513 12:05:15 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:22.513 12:05:15 json_config -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:22.513 12:05:15 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:22.513 12:05:15 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:22.513 12:05:15 json_config -- 
json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:22.513 INFO: JSON configuration test init 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:22.513 12:05:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.513 12:05:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:22.513 12:05:15 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:22.513 12:05:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.513 12:05:15 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:22.513 12:05:15 json_config -- json_config/common.sh@9 -- # local app=target 00:04:22.513 12:05:15 json_config -- json_config/common.sh@10 -- # shift 00:04:22.513 12:05:15 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:22.513 12:05:15 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:22.513 12:05:15 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:22.513 12:05:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.513 12:05:15 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:22.513 12:05:15 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2753083 00:04:22.513 12:05:15 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 
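The json_config test starts spdk_tgt in the background and then blocks in `waitforlisten` until the RPC socket `/var/tmp/spdk_tgt.sock` appears. A self-contained sketch of that polling loop is below, with a plain file standing in for the Unix socket; function and variable names are illustrative, and the real helper also retries the RPC connection itself.

```shell
# Sketch of a waitforlisten-style readiness loop: poll for a path to
# appear, giving up after max_retries iterations.
wait_for_path() {
    local path=$1 max_retries=${2:-100}
    local i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        if [ "$i" -ge "$max_retries" ]; then
            echo "timed out waiting for $path" >&2
            return 1
        fi
        sleep 0.1
    done
    echo "$path is ready"
}

tmpfile=$(mktemp -u)                 # a path that does not exist yet
( sleep 0.3; touch "$tmpfile" ) &    # stand-in for the target coming up
wait_for_path "$tmpfile"
rm -f "$tmpfile"
wait 2>/dev/null || true
```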
00:04:22.513 12:05:15 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:22.513 Waiting for target to run... 00:04:22.513 12:05:15 json_config -- json_config/common.sh@25 -- # waitforlisten 2753083 /var/tmp/spdk_tgt.sock 00:04:22.513 12:05:15 json_config -- common/autotest_common.sh@831 -- # '[' -z 2753083 ']' 00:04:22.513 12:05:15 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:22.513 12:05:15 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:22.513 12:05:15 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:22.513 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:22.513 12:05:15 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:22.513 12:05:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:22.513 [2024-07-26 12:05:15.629720] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:04:22.513 [2024-07-26 12:05:15.629821] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2753083 ] 00:04:22.513 EAL: No free 2048 kB hugepages reported on node 1 00:04:23.077 [2024-07-26 12:05:16.134513] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.077 [2024-07-26 12:05:16.237268] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.334 12:05:16 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:23.334 12:05:16 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:23.334 12:05:16 json_config -- json_config/common.sh@26 -- # echo '' 00:04:23.334 00:04:23.334 12:05:16 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:23.334 12:05:16 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:23.334 12:05:16 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:23.334 12:05:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.592 12:05:16 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:23.592 12:05:16 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:23.592 12:05:16 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:23.592 12:05:16 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:23.592 12:05:16 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:23.592 12:05:16 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:23.592 12:05:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:26.882 
12:05:19 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:04:26.882 12:05:19 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:26.882 12:05:19 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.882 12:05:19 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.882 12:05:19 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:26.882 12:05:19 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:26.882 12:05:19 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:26.882 12:05:19 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:04:26.882 12:05:19 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:26.882 12:05:19 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@48 -- # local get_types 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@51 -- # sort 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:04:26.882 12:05:20 
json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:26.882 12:05:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@59 -- # return 0 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:04:26.882 12:05:20 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:26.882 12:05:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:26.882 12:05:20 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:26.883 12:05:20 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:04:26.883 12:05:20 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:04:26.883 12:05:20 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:26.883 12:05:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:27.141 MallocForNvmf0 00:04:27.141 12:05:20 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:27.141 12:05:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:27.399 MallocForNvmf1 00:04:27.399 12:05:20 
json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:27.399 12:05:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:27.656 [2024-07-26 12:05:20.805389] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:27.656 12:05:20 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.656 12:05:20 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:27.915 12:05:21 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:27.915 12:05:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:28.174 12:05:21 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:28.174 12:05:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:28.433 12:05:21 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:28.433 12:05:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:28.691 [2024-07-26 12:05:21.796652] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:28.691 12:05:21 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:04:28.691 12:05:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.691 12:05:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.691 12:05:21 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:04:28.691 12:05:21 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.691 12:05:21 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.691 12:05:21 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:04:28.691 12:05:21 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.691 12:05:21 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:28.948 MallocBdevForConfigChangeCheck 00:04:28.948 12:05:22 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:04:28.948 12:05:22 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:28.948 12:05:22 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:28.948 12:05:22 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:04:28.948 12:05:22 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:29.514 12:05:22 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:04:29.514 INFO: shutting down applications... 
00:04:29.514 12:05:22 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:04:29.514 12:05:22 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:04:29.514 12:05:22 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:04:29.514 12:05:22 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:30.898 Calling clear_iscsi_subsystem 00:04:30.898 Calling clear_nvmf_subsystem 00:04:30.898 Calling clear_nbd_subsystem 00:04:30.898 Calling clear_ublk_subsystem 00:04:30.898 Calling clear_vhost_blk_subsystem 00:04:30.898 Calling clear_vhost_scsi_subsystem 00:04:30.898 Calling clear_bdev_subsystem 00:04:30.899 12:05:24 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:30.899 12:05:24 json_config -- json_config/json_config.sh@347 -- # count=100 00:04:30.899 12:05:24 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:04:30.899 12:05:24 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:30.899 12:05:24 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:30.899 12:05:24 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:31.466 12:05:24 json_config -- json_config/json_config.sh@349 -- # break 00:04:31.466 12:05:24 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:04:31.466 12:05:24 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:04:31.466 12:05:24 json_config -- 
json_config/common.sh@31 -- # local app=target 00:04:31.466 12:05:24 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:31.466 12:05:24 json_config -- json_config/common.sh@35 -- # [[ -n 2753083 ]] 00:04:31.466 12:05:24 json_config -- json_config/common.sh@38 -- # kill -SIGINT 2753083 00:04:31.466 12:05:24 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:31.466 12:05:24 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:31.466 12:05:24 json_config -- json_config/common.sh@41 -- # kill -0 2753083 00:04:31.466 12:05:24 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:32.034 12:05:25 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:32.034 12:05:25 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:32.034 12:05:25 json_config -- json_config/common.sh@41 -- # kill -0 2753083 00:04:32.034 12:05:25 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:32.034 12:05:25 json_config -- json_config/common.sh@43 -- # break 00:04:32.034 12:05:25 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:32.034 12:05:25 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:32.034 SPDK target shutdown done 00:04:32.034 12:05:25 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:04:32.034 INFO: relaunching applications... 
00:04:32.034 12:05:25 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.034 12:05:25 json_config -- json_config/common.sh@9 -- # local app=target 00:04:32.034 12:05:25 json_config -- json_config/common.sh@10 -- # shift 00:04:32.034 12:05:25 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:32.034 12:05:25 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:32.034 12:05:25 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:32.034 12:05:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.035 12:05:25 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:32.035 12:05:25 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=2754280 00:04:32.035 12:05:25 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:32.035 12:05:25 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:32.035 Waiting for target to run... 00:04:32.035 12:05:25 json_config -- json_config/common.sh@25 -- # waitforlisten 2754280 /var/tmp/spdk_tgt.sock 00:04:32.035 12:05:25 json_config -- common/autotest_common.sh@831 -- # '[' -z 2754280 ']' 00:04:32.035 12:05:25 json_config -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:32.035 12:05:25 json_config -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:32.035 12:05:25 json_config -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:32.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:32.035 12:05:25 json_config -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:32.035 12:05:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:32.035 [2024-07-26 12:05:25.065452] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:04:32.035 [2024-07-26 12:05:25.065548] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2754280 ] 00:04:32.035 EAL: No free 2048 kB hugepages reported on node 1 00:04:32.601 [2024-07-26 12:05:25.592034] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:32.601 [2024-07-26 12:05:25.696686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:35.895 [2024-07-26 12:05:28.738812] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.895 [2024-07-26 12:05:28.771280] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:36.460 12:05:29 json_config -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:36.460 12:05:29 json_config -- common/autotest_common.sh@864 -- # return 0 00:04:36.460 12:05:29 json_config -- json_config/common.sh@26 -- # echo '' 00:04:36.460 00:04:36.460 12:05:29 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:04:36.460 12:05:29 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:36.460 INFO: Checking if target configuration is the same... 
00:04:36.460 12:05:29 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.460 12:05:29 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:04:36.460 12:05:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.460 + '[' 2 -ne 2 ']' 00:04:36.460 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:36.460 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:36.460 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:36.460 +++ basename /dev/fd/62 00:04:36.460 ++ mktemp /tmp/62.XXX 00:04:36.460 + tmp_file_1=/tmp/62.hC9 00:04:36.460 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.460 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:36.460 + tmp_file_2=/tmp/spdk_tgt_config.json.69u 00:04:36.460 + ret=0 00:04:36.460 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.718 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:36.718 + diff -u /tmp/62.hC9 /tmp/spdk_tgt_config.json.69u 00:04:36.718 + echo 'INFO: JSON config files are the same' 00:04:36.718 INFO: JSON config files are the same 00:04:36.718 + rm /tmp/62.hC9 /tmp/spdk_tgt_config.json.69u 00:04:36.718 + exit 0 00:04:36.718 12:05:29 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:04:36.718 12:05:29 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:36.718 INFO: changing configuration and checking if this can be detected... 
00:04:36.718 12:05:29 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:36.718 12:05:29 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:36.975 12:05:30 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.975 12:05:30 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:04:36.976 12:05:30 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.976 + '[' 2 -ne 2 ']' 00:04:36.976 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:36.976 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:36.976 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:36.976 +++ basename /dev/fd/62 00:04:36.976 ++ mktemp /tmp/62.XXX 00:04:36.976 + tmp_file_1=/tmp/62.rAe 00:04:36.976 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:36.976 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:36.976 + tmp_file_2=/tmp/spdk_tgt_config.json.Op5 00:04:36.976 + ret=0 00:04:36.976 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:37.542 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:37.542 + diff -u /tmp/62.rAe /tmp/spdk_tgt_config.json.Op5 00:04:37.542 + ret=1 00:04:37.542 + echo '=== Start of file: /tmp/62.rAe ===' 00:04:37.542 + cat /tmp/62.rAe 00:04:37.542 + echo '=== End of file: /tmp/62.rAe ===' 00:04:37.542 + echo '' 00:04:37.542 + echo '=== Start of file: /tmp/spdk_tgt_config.json.Op5 ===' 00:04:37.542 + cat /tmp/spdk_tgt_config.json.Op5 00:04:37.542 + echo '=== End of file: /tmp/spdk_tgt_config.json.Op5 ===' 00:04:37.542 + echo '' 00:04:37.542 + rm /tmp/62.rAe /tmp/spdk_tgt_config.json.Op5 00:04:37.542 + exit 1 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:04:37.542 INFO: configuration change detected. 
00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@321 -- # [[ -n 2754280 ]] 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@197 -- # uname -s 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:37.542 12:05:30 json_config -- json_config/json_config.sh@327 -- # killprocess 2754280 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@950 -- # '[' -z 2754280 ']' 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@954 -- # kill -0 
2754280 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@955 -- # uname 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2754280 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2754280' 00:04:37.542 killing process with pid 2754280 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@969 -- # kill 2754280 00:04:37.542 12:05:30 json_config -- common/autotest_common.sh@974 -- # wait 2754280 00:04:39.449 12:05:32 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:39.449 12:05:32 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:04:39.449 12:05:32 json_config -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:39.449 12:05:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.449 12:05:32 json_config -- json_config/json_config.sh@332 -- # return 0 00:04:39.449 12:05:32 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:04:39.449 INFO: Success 00:04:39.449 00:04:39.449 real 0m16.825s 00:04:39.449 user 0m18.728s 00:04:39.449 sys 0m2.204s 00:04:39.449 12:05:32 json_config -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:39.449 12:05:32 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:39.449 ************************************ 00:04:39.449 END TEST json_config 00:04:39.449 ************************************ 00:04:39.449 12:05:32 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:39.449 12:05:32 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:39.449 12:05:32 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:39.449 12:05:32 -- common/autotest_common.sh@10 -- # set +x 00:04:39.449 ************************************ 00:04:39.449 START TEST json_config_extra_key 00:04:39.449 ************************************ 00:04:39.449 12:05:32 json_config_extra_key -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:39.449 12:05:32 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:04:39.449 12:05:32 
json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:39.449 12:05:32 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:39.449 12:05:32 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:39.449 12:05:32 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:39.449 12:05:32 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.449 12:05:32 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.449 12:05:32 json_config_extra_key -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.449 12:05:32 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:39.449 12:05:32 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:39.449 12:05:32 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:39.449 12:05:32 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:39.449 12:05:32 
json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:39.449 12:05:32 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:39.449 12:05:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:39.449 12:05:32 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:39.449 12:05:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:39.449 12:05:32 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:39.449 12:05:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:39.449 12:05:32 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:39.449 12:05:32 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:39.449 12:05:32 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:39.449 INFO: launching applications... 
00:04:39.449 12:05:32 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:39.449 12:05:32 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:39.449 12:05:32 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:39.449 12:05:32 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:39.449 12:05:32 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:39.449 12:05:32 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:39.449 12:05:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.449 12:05:32 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:39.449 12:05:32 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=2755315 00:04:39.449 12:05:32 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:39.449 12:05:32 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:39.449 Waiting for target to run... 
00:04:39.449 12:05:32 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 2755315 /var/tmp/spdk_tgt.sock 00:04:39.449 12:05:32 json_config_extra_key -- common/autotest_common.sh@831 -- # '[' -z 2755315 ']' 00:04:39.450 12:05:32 json_config_extra_key -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:39.450 12:05:32 json_config_extra_key -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:39.450 12:05:32 json_config_extra_key -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:39.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:39.450 12:05:32 json_config_extra_key -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:39.450 12:05:32 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:39.450 [2024-07-26 12:05:32.495976] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:04:39.450 [2024-07-26 12:05:32.496095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2755315 ] 00:04:39.450 EAL: No free 2048 kB hugepages reported on node 1 00:04:39.709 [2024-07-26 12:05:32.844555] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:39.709 [2024-07-26 12:05:32.933455] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.279 12:05:33 json_config_extra_key -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:40.279 12:05:33 json_config_extra_key -- common/autotest_common.sh@864 -- # return 0 00:04:40.279 12:05:33 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:40.279 00:04:40.279 12:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:40.279 INFO: shutting down applications... 
00:04:40.279 12:05:33 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:40.279 12:05:33 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:40.279 12:05:33 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:40.279 12:05:33 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 2755315 ]] 00:04:40.279 12:05:33 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 2755315 00:04:40.279 12:05:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:40.279 12:05:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.279 12:05:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2755315 00:04:40.279 12:05:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.847 12:05:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.847 12:05:33 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.847 12:05:33 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2755315 00:04:40.847 12:05:33 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:41.416 12:05:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:41.416 12:05:34 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:41.416 12:05:34 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 2755315 00:04:41.416 12:05:34 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:41.416 12:05:34 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:41.416 12:05:34 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:41.416 12:05:34 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:41.416 SPDK target shutdown done 00:04:41.416 12:05:34 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # 
echo Success 00:04:41.416 Success 00:04:41.416 00:04:41.416 real 0m2.041s 00:04:41.416 user 0m1.542s 00:04:41.416 sys 0m0.442s 00:04:41.416 12:05:34 json_config_extra_key -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:41.416 12:05:34 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:41.416 ************************************ 00:04:41.416 END TEST json_config_extra_key 00:04:41.416 ************************************ 00:04:41.416 12:05:34 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:41.416 12:05:34 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:41.416 12:05:34 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:41.416 12:05:34 -- common/autotest_common.sh@10 -- # set +x 00:04:41.416 ************************************ 00:04:41.416 START TEST alias_rpc 00:04:41.416 ************************************ 00:04:41.416 12:05:34 alias_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:41.416 * Looking for test storage... 
00:04:41.416 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:41.416 12:05:34 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:41.416 12:05:34 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=2755627 00:04:41.416 12:05:34 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:41.416 12:05:34 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 2755627 00:04:41.416 12:05:34 alias_rpc -- common/autotest_common.sh@831 -- # '[' -z 2755627 ']' 00:04:41.416 12:05:34 alias_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:41.416 12:05:34 alias_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:41.416 12:05:34 alias_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:41.416 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:41.416 12:05:34 alias_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:41.416 12:05:34 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.416 [2024-07-26 12:05:34.595030] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:04:41.416 [2024-07-26 12:05:34.595146] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2755627 ] 00:04:41.416 EAL: No free 2048 kB hugepages reported on node 1 00:04:41.416 [2024-07-26 12:05:34.661082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:41.674 [2024-07-26 12:05:34.779430] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.935 12:05:35 alias_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:41.935 12:05:35 alias_rpc -- common/autotest_common.sh@864 -- # return 0 00:04:41.935 12:05:35 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:42.195 12:05:35 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 2755627 00:04:42.195 12:05:35 alias_rpc -- common/autotest_common.sh@950 -- # '[' -z 2755627 ']' 00:04:42.195 12:05:35 alias_rpc -- common/autotest_common.sh@954 -- # kill -0 2755627 00:04:42.195 12:05:35 alias_rpc -- common/autotest_common.sh@955 -- # uname 00:04:42.195 12:05:35 alias_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:42.195 12:05:35 alias_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2755627 00:04:42.195 12:05:35 alias_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:42.195 12:05:35 alias_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:42.195 12:05:35 alias_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2755627' 00:04:42.195 killing process with pid 2755627 00:04:42.195 12:05:35 alias_rpc -- common/autotest_common.sh@969 -- # kill 2755627 00:04:42.195 12:05:35 alias_rpc -- common/autotest_common.sh@974 -- # wait 2755627 00:04:42.763 00:04:42.763 real 0m1.306s 00:04:42.763 user 0m1.403s 
00:04:42.763 sys 0m0.443s 00:04:42.763 12:05:35 alias_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:42.763 12:05:35 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.763 ************************************ 00:04:42.763 END TEST alias_rpc 00:04:42.763 ************************************ 00:04:42.763 12:05:35 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:04:42.763 12:05:35 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:42.763 12:05:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:42.763 12:05:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:42.763 12:05:35 -- common/autotest_common.sh@10 -- # set +x 00:04:42.763 ************************************ 00:04:42.763 START TEST spdkcli_tcp 00:04:42.763 ************************************ 00:04:42.763 12:05:35 spdkcli_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:42.763 * Looking for test storage... 
00:04:42.763 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:42.763 12:05:35 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:42.763 12:05:35 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:42.763 12:05:35 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:42.763 12:05:35 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:42.763 12:05:35 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:42.763 12:05:35 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:42.763 12:05:35 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:42.763 12:05:35 spdkcli_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:04:42.763 12:05:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.763 12:05:35 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=2755822 00:04:42.763 12:05:35 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:42.763 12:05:35 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 2755822 00:04:42.764 12:05:35 spdkcli_tcp -- common/autotest_common.sh@831 -- # '[' -z 2755822 ']' 00:04:42.764 12:05:35 spdkcli_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:42.764 12:05:35 spdkcli_tcp -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:42.764 12:05:35 spdkcli_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:42.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:04:42.764 12:05:35 spdkcli_tcp -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:42.764 12:05:35 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:42.764 [2024-07-26 12:05:35.944261] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:04:42.764 [2024-07-26 12:05:35.944377] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2755822 ] 00:04:42.764 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.764 [2024-07-26 12:05:36.000731] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.023 [2024-07-26 12:05:36.112447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.023 [2024-07-26 12:05:36.112451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.281 12:05:36 spdkcli_tcp -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:43.281 12:05:36 spdkcli_tcp -- common/autotest_common.sh@864 -- # return 0 00:04:43.281 12:05:36 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=2755834 00:04:43.281 12:05:36 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:43.281 12:05:36 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:43.541 [ 00:04:43.541 "bdev_malloc_delete", 00:04:43.541 "bdev_malloc_create", 00:04:43.541 "bdev_null_resize", 00:04:43.541 "bdev_null_delete", 00:04:43.541 "bdev_null_create", 00:04:43.541 "bdev_nvme_cuse_unregister", 00:04:43.541 "bdev_nvme_cuse_register", 00:04:43.541 "bdev_opal_new_user", 00:04:43.541 "bdev_opal_set_lock_state", 00:04:43.541 "bdev_opal_delete", 00:04:43.541 "bdev_opal_get_info", 00:04:43.541 "bdev_opal_create", 00:04:43.541 "bdev_nvme_opal_revert", 00:04:43.541 
"bdev_nvme_opal_init", 00:04:43.541 "bdev_nvme_send_cmd", 00:04:43.541 "bdev_nvme_get_path_iostat", 00:04:43.541 "bdev_nvme_get_mdns_discovery_info", 00:04:43.541 "bdev_nvme_stop_mdns_discovery", 00:04:43.541 "bdev_nvme_start_mdns_discovery", 00:04:43.541 "bdev_nvme_set_multipath_policy", 00:04:43.541 "bdev_nvme_set_preferred_path", 00:04:43.541 "bdev_nvme_get_io_paths", 00:04:43.541 "bdev_nvme_remove_error_injection", 00:04:43.541 "bdev_nvme_add_error_injection", 00:04:43.541 "bdev_nvme_get_discovery_info", 00:04:43.541 "bdev_nvme_stop_discovery", 00:04:43.541 "bdev_nvme_start_discovery", 00:04:43.541 "bdev_nvme_get_controller_health_info", 00:04:43.541 "bdev_nvme_disable_controller", 00:04:43.541 "bdev_nvme_enable_controller", 00:04:43.541 "bdev_nvme_reset_controller", 00:04:43.541 "bdev_nvme_get_transport_statistics", 00:04:43.541 "bdev_nvme_apply_firmware", 00:04:43.541 "bdev_nvme_detach_controller", 00:04:43.541 "bdev_nvme_get_controllers", 00:04:43.541 "bdev_nvme_attach_controller", 00:04:43.541 "bdev_nvme_set_hotplug", 00:04:43.541 "bdev_nvme_set_options", 00:04:43.541 "bdev_passthru_delete", 00:04:43.541 "bdev_passthru_create", 00:04:43.541 "bdev_lvol_set_parent_bdev", 00:04:43.541 "bdev_lvol_set_parent", 00:04:43.541 "bdev_lvol_check_shallow_copy", 00:04:43.541 "bdev_lvol_start_shallow_copy", 00:04:43.541 "bdev_lvol_grow_lvstore", 00:04:43.541 "bdev_lvol_get_lvols", 00:04:43.541 "bdev_lvol_get_lvstores", 00:04:43.541 "bdev_lvol_delete", 00:04:43.541 "bdev_lvol_set_read_only", 00:04:43.541 "bdev_lvol_resize", 00:04:43.541 "bdev_lvol_decouple_parent", 00:04:43.541 "bdev_lvol_inflate", 00:04:43.541 "bdev_lvol_rename", 00:04:43.541 "bdev_lvol_clone_bdev", 00:04:43.541 "bdev_lvol_clone", 00:04:43.541 "bdev_lvol_snapshot", 00:04:43.541 "bdev_lvol_create", 00:04:43.541 "bdev_lvol_delete_lvstore", 00:04:43.541 "bdev_lvol_rename_lvstore", 00:04:43.541 "bdev_lvol_create_lvstore", 00:04:43.541 "bdev_raid_set_options", 00:04:43.541 "bdev_raid_remove_base_bdev", 
00:04:43.541 "bdev_raid_add_base_bdev", 00:04:43.541 "bdev_raid_delete", 00:04:43.541 "bdev_raid_create", 00:04:43.541 "bdev_raid_get_bdevs", 00:04:43.541 "bdev_error_inject_error", 00:04:43.541 "bdev_error_delete", 00:04:43.541 "bdev_error_create", 00:04:43.541 "bdev_split_delete", 00:04:43.541 "bdev_split_create", 00:04:43.541 "bdev_delay_delete", 00:04:43.541 "bdev_delay_create", 00:04:43.541 "bdev_delay_update_latency", 00:04:43.541 "bdev_zone_block_delete", 00:04:43.541 "bdev_zone_block_create", 00:04:43.541 "blobfs_create", 00:04:43.541 "blobfs_detect", 00:04:43.541 "blobfs_set_cache_size", 00:04:43.541 "bdev_aio_delete", 00:04:43.541 "bdev_aio_rescan", 00:04:43.541 "bdev_aio_create", 00:04:43.541 "bdev_ftl_set_property", 00:04:43.541 "bdev_ftl_get_properties", 00:04:43.541 "bdev_ftl_get_stats", 00:04:43.541 "bdev_ftl_unmap", 00:04:43.541 "bdev_ftl_unload", 00:04:43.541 "bdev_ftl_delete", 00:04:43.541 "bdev_ftl_load", 00:04:43.541 "bdev_ftl_create", 00:04:43.541 "bdev_virtio_attach_controller", 00:04:43.541 "bdev_virtio_scsi_get_devices", 00:04:43.541 "bdev_virtio_detach_controller", 00:04:43.541 "bdev_virtio_blk_set_hotplug", 00:04:43.541 "bdev_iscsi_delete", 00:04:43.541 "bdev_iscsi_create", 00:04:43.541 "bdev_iscsi_set_options", 00:04:43.541 "accel_error_inject_error", 00:04:43.541 "ioat_scan_accel_module", 00:04:43.541 "dsa_scan_accel_module", 00:04:43.541 "iaa_scan_accel_module", 00:04:43.541 "vfu_virtio_create_scsi_endpoint", 00:04:43.541 "vfu_virtio_scsi_remove_target", 00:04:43.541 "vfu_virtio_scsi_add_target", 00:04:43.541 "vfu_virtio_create_blk_endpoint", 00:04:43.541 "vfu_virtio_delete_endpoint", 00:04:43.541 "keyring_file_remove_key", 00:04:43.541 "keyring_file_add_key", 00:04:43.541 "keyring_linux_set_options", 00:04:43.541 "iscsi_get_histogram", 00:04:43.541 "iscsi_enable_histogram", 00:04:43.541 "iscsi_set_options", 00:04:43.541 "iscsi_get_auth_groups", 00:04:43.541 "iscsi_auth_group_remove_secret", 00:04:43.541 "iscsi_auth_group_add_secret", 
00:04:43.541 "iscsi_delete_auth_group", 00:04:43.541 "iscsi_create_auth_group", 00:04:43.541 "iscsi_set_discovery_auth", 00:04:43.541 "iscsi_get_options", 00:04:43.541 "iscsi_target_node_request_logout", 00:04:43.541 "iscsi_target_node_set_redirect", 00:04:43.541 "iscsi_target_node_set_auth", 00:04:43.541 "iscsi_target_node_add_lun", 00:04:43.541 "iscsi_get_stats", 00:04:43.541 "iscsi_get_connections", 00:04:43.541 "iscsi_portal_group_set_auth", 00:04:43.541 "iscsi_start_portal_group", 00:04:43.541 "iscsi_delete_portal_group", 00:04:43.541 "iscsi_create_portal_group", 00:04:43.541 "iscsi_get_portal_groups", 00:04:43.542 "iscsi_delete_target_node", 00:04:43.542 "iscsi_target_node_remove_pg_ig_maps", 00:04:43.542 "iscsi_target_node_add_pg_ig_maps", 00:04:43.542 "iscsi_create_target_node", 00:04:43.542 "iscsi_get_target_nodes", 00:04:43.542 "iscsi_delete_initiator_group", 00:04:43.542 "iscsi_initiator_group_remove_initiators", 00:04:43.542 "iscsi_initiator_group_add_initiators", 00:04:43.542 "iscsi_create_initiator_group", 00:04:43.542 "iscsi_get_initiator_groups", 00:04:43.542 "nvmf_set_crdt", 00:04:43.542 "nvmf_set_config", 00:04:43.542 "nvmf_set_max_subsystems", 00:04:43.542 "nvmf_stop_mdns_prr", 00:04:43.542 "nvmf_publish_mdns_prr", 00:04:43.542 "nvmf_subsystem_get_listeners", 00:04:43.542 "nvmf_subsystem_get_qpairs", 00:04:43.542 "nvmf_subsystem_get_controllers", 00:04:43.542 "nvmf_get_stats", 00:04:43.542 "nvmf_get_transports", 00:04:43.542 "nvmf_create_transport", 00:04:43.542 "nvmf_get_targets", 00:04:43.542 "nvmf_delete_target", 00:04:43.542 "nvmf_create_target", 00:04:43.542 "nvmf_subsystem_allow_any_host", 00:04:43.542 "nvmf_subsystem_remove_host", 00:04:43.542 "nvmf_subsystem_add_host", 00:04:43.542 "nvmf_ns_remove_host", 00:04:43.542 "nvmf_ns_add_host", 00:04:43.542 "nvmf_subsystem_remove_ns", 00:04:43.542 "nvmf_subsystem_add_ns", 00:04:43.542 "nvmf_subsystem_listener_set_ana_state", 00:04:43.542 "nvmf_discovery_get_referrals", 00:04:43.542 
"nvmf_discovery_remove_referral", 00:04:43.542 "nvmf_discovery_add_referral", 00:04:43.542 "nvmf_subsystem_remove_listener", 00:04:43.542 "nvmf_subsystem_add_listener", 00:04:43.542 "nvmf_delete_subsystem", 00:04:43.542 "nvmf_create_subsystem", 00:04:43.542 "nvmf_get_subsystems", 00:04:43.542 "env_dpdk_get_mem_stats", 00:04:43.542 "nbd_get_disks", 00:04:43.542 "nbd_stop_disk", 00:04:43.542 "nbd_start_disk", 00:04:43.542 "ublk_recover_disk", 00:04:43.542 "ublk_get_disks", 00:04:43.542 "ublk_stop_disk", 00:04:43.542 "ublk_start_disk", 00:04:43.542 "ublk_destroy_target", 00:04:43.542 "ublk_create_target", 00:04:43.542 "virtio_blk_create_transport", 00:04:43.542 "virtio_blk_get_transports", 00:04:43.542 "vhost_controller_set_coalescing", 00:04:43.542 "vhost_get_controllers", 00:04:43.542 "vhost_delete_controller", 00:04:43.542 "vhost_create_blk_controller", 00:04:43.542 "vhost_scsi_controller_remove_target", 00:04:43.542 "vhost_scsi_controller_add_target", 00:04:43.542 "vhost_start_scsi_controller", 00:04:43.542 "vhost_create_scsi_controller", 00:04:43.542 "thread_set_cpumask", 00:04:43.542 "framework_get_governor", 00:04:43.542 "framework_get_scheduler", 00:04:43.542 "framework_set_scheduler", 00:04:43.542 "framework_get_reactors", 00:04:43.542 "thread_get_io_channels", 00:04:43.542 "thread_get_pollers", 00:04:43.542 "thread_get_stats", 00:04:43.542 "framework_monitor_context_switch", 00:04:43.542 "spdk_kill_instance", 00:04:43.542 "log_enable_timestamps", 00:04:43.542 "log_get_flags", 00:04:43.542 "log_clear_flag", 00:04:43.542 "log_set_flag", 00:04:43.542 "log_get_level", 00:04:43.542 "log_set_level", 00:04:43.542 "log_get_print_level", 00:04:43.542 "log_set_print_level", 00:04:43.542 "framework_enable_cpumask_locks", 00:04:43.542 "framework_disable_cpumask_locks", 00:04:43.542 "framework_wait_init", 00:04:43.542 "framework_start_init", 00:04:43.542 "scsi_get_devices", 00:04:43.542 "bdev_get_histogram", 00:04:43.542 "bdev_enable_histogram", 00:04:43.542 
"bdev_set_qos_limit", 00:04:43.542 "bdev_set_qd_sampling_period", 00:04:43.542 "bdev_get_bdevs", 00:04:43.542 "bdev_reset_iostat", 00:04:43.542 "bdev_get_iostat", 00:04:43.542 "bdev_examine", 00:04:43.542 "bdev_wait_for_examine", 00:04:43.542 "bdev_set_options", 00:04:43.542 "notify_get_notifications", 00:04:43.542 "notify_get_types", 00:04:43.542 "accel_get_stats", 00:04:43.542 "accel_set_options", 00:04:43.542 "accel_set_driver", 00:04:43.542 "accel_crypto_key_destroy", 00:04:43.542 "accel_crypto_keys_get", 00:04:43.542 "accel_crypto_key_create", 00:04:43.542 "accel_assign_opc", 00:04:43.542 "accel_get_module_info", 00:04:43.542 "accel_get_opc_assignments", 00:04:43.542 "vmd_rescan", 00:04:43.542 "vmd_remove_device", 00:04:43.542 "vmd_enable", 00:04:43.542 "sock_get_default_impl", 00:04:43.542 "sock_set_default_impl", 00:04:43.542 "sock_impl_set_options", 00:04:43.542 "sock_impl_get_options", 00:04:43.542 "iobuf_get_stats", 00:04:43.542 "iobuf_set_options", 00:04:43.542 "keyring_get_keys", 00:04:43.542 "framework_get_pci_devices", 00:04:43.542 "framework_get_config", 00:04:43.542 "framework_get_subsystems", 00:04:43.542 "vfu_tgt_set_base_path", 00:04:43.542 "trace_get_info", 00:04:43.542 "trace_get_tpoint_group_mask", 00:04:43.542 "trace_disable_tpoint_group", 00:04:43.542 "trace_enable_tpoint_group", 00:04:43.542 "trace_clear_tpoint_mask", 00:04:43.542 "trace_set_tpoint_mask", 00:04:43.542 "spdk_get_version", 00:04:43.542 "rpc_get_methods" 00:04:43.542 ] 00:04:43.542 12:05:36 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:43.542 12:05:36 spdkcli_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:04:43.542 12:05:36 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.542 12:05:36 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:43.542 12:05:36 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 2755822 00:04:43.542 12:05:36 spdkcli_tcp -- common/autotest_common.sh@950 -- # '[' -z 2755822 ']' 
00:04:43.542 12:05:36 spdkcli_tcp -- common/autotest_common.sh@954 -- # kill -0 2755822 00:04:43.542 12:05:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # uname 00:04:43.542 12:05:36 spdkcli_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:43.542 12:05:36 spdkcli_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2755822 00:04:43.542 12:05:36 spdkcli_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:43.542 12:05:36 spdkcli_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:43.542 12:05:36 spdkcli_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2755822' 00:04:43.542 killing process with pid 2755822 00:04:43.542 12:05:36 spdkcli_tcp -- common/autotest_common.sh@969 -- # kill 2755822 00:04:43.542 12:05:36 spdkcli_tcp -- common/autotest_common.sh@974 -- # wait 2755822 00:04:44.110 00:04:44.110 real 0m1.305s 00:04:44.110 user 0m2.252s 00:04:44.110 sys 0m0.467s 00:04:44.110 12:05:37 spdkcli_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:44.110 12:05:37 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.110 ************************************ 00:04:44.110 END TEST spdkcli_tcp 00:04:44.110 ************************************ 00:04:44.110 12:05:37 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:44.110 12:05:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:44.110 12:05:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:44.110 12:05:37 -- common/autotest_common.sh@10 -- # set +x 00:04:44.110 ************************************ 00:04:44.110 START TEST dpdk_mem_utility 00:04:44.110 ************************************ 00:04:44.110 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:44.110 
* Looking for test storage... 00:04:44.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:44.110 12:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:44.110 12:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=2756029 00:04:44.110 12:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:44.110 12:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 2756029 00:04:44.110 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@831 -- # '[' -z 2756029 ']' 00:04:44.110 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:44.110 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:44.110 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:44.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:44.110 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:44.110 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.110 [2024-07-26 12:05:37.290716] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:04:44.110 [2024-07-26 12:05:37.290799] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756029 ] 00:04:44.110 EAL: No free 2048 kB hugepages reported on node 1 00:04:44.110 [2024-07-26 12:05:37.347289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:44.369 [2024-07-26 12:05:37.452930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:44.629 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:44.629 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@864 -- # return 0 00:04:44.629 12:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:44.629 12:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:44.629 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:44.629 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:44.629 { 00:04:44.629 "filename": "/tmp/spdk_mem_dump.txt" 00:04:44.629 } 00:04:44.629 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:44.629 12:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:44.629 DPDK memory size 814.000000 MiB in 1 heap(s) 00:04:44.629 1 heaps totaling size 814.000000 MiB 00:04:44.629 size: 814.000000 MiB heap id: 0 00:04:44.629 end heaps---------- 00:04:44.629 8 mempools totaling size 598.116089 MiB 00:04:44.629 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:44.629 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:44.629 size: 84.521057 MiB name: bdev_io_2756029 00:04:44.629 size: 51.011292 MiB name: evtpool_2756029 
00:04:44.629 size: 50.003479 MiB name: msgpool_2756029 00:04:44.629 size: 21.763794 MiB name: PDU_Pool 00:04:44.629 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:44.629 size: 0.026123 MiB name: Session_Pool 00:04:44.629 end mempools------- 00:04:44.629 6 memzones totaling size 4.142822 MiB 00:04:44.629 size: 1.000366 MiB name: RG_ring_0_2756029 00:04:44.629 size: 1.000366 MiB name: RG_ring_1_2756029 00:04:44.629 size: 1.000366 MiB name: RG_ring_4_2756029 00:04:44.629 size: 1.000366 MiB name: RG_ring_5_2756029 00:04:44.629 size: 0.125366 MiB name: RG_ring_2_2756029 00:04:44.629 size: 0.015991 MiB name: RG_ring_3_2756029 00:04:44.629 end memzones------- 00:04:44.629 12:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:44.629 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:04:44.629 list of free elements. size: 12.519348 MiB 00:04:44.629 element at address: 0x200000400000 with size: 1.999512 MiB 00:04:44.629 element at address: 0x200018e00000 with size: 0.999878 MiB 00:04:44.629 element at address: 0x200019000000 with size: 0.999878 MiB 00:04:44.629 element at address: 0x200003e00000 with size: 0.996277 MiB 00:04:44.629 element at address: 0x200031c00000 with size: 0.994446 MiB 00:04:44.629 element at address: 0x200013800000 with size: 0.978699 MiB 00:04:44.629 element at address: 0x200007000000 with size: 0.959839 MiB 00:04:44.629 element at address: 0x200019200000 with size: 0.936584 MiB 00:04:44.629 element at address: 0x200000200000 with size: 0.841614 MiB 00:04:44.629 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:04:44.629 element at address: 0x20000b200000 with size: 0.490723 MiB 00:04:44.629 element at address: 0x200000800000 with size: 0.487793 MiB 00:04:44.629 element at address: 0x200019400000 with size: 0.485657 MiB 00:04:44.629 element at address: 0x200027e00000 with size: 0.410034 
MiB 00:04:44.629 element at address: 0x200003a00000 with size: 0.355530 MiB 00:04:44.629 list of standard malloc elements. size: 199.218079 MiB 00:04:44.629 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:04:44.629 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:04:44.629 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:44.629 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:04:44.629 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:04:44.629 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:44.629 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:04:44.629 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:44.629 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:04:44.629 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:04:44.629 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:04:44.629 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:04:44.629 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:04:44.629 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:04:44.629 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:44.629 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:44.629 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:04:44.629 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:04:44.629 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:04:44.629 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:04:44.629 element at address: 0x200003adb300 with size: 0.000183 MiB 00:04:44.629 element at address: 0x200003adb500 with size: 0.000183 MiB 00:04:44.629 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:04:44.629 element at address: 0x200003affa80 with size: 0.000183 MiB 00:04:44.629 element at address: 0x200003affb40 with size: 0.000183 MiB 00:04:44.629 element at address: 0x200003eff0c0 with 
size: 0.000183 MiB 00:04:44.629 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:04:44.629 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:04:44.629 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:04:44.629 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:04:44.629 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:04:44.629 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:04:44.629 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:04:44.629 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:04:44.629 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:04:44.629 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:04:44.629 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:04:44.629 element at address: 0x200027e69040 with size: 0.000183 MiB 00:04:44.629 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:04:44.629 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:04:44.629 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:04:44.629 list of memzone associated elements. 
size: 602.262573 MiB 00:04:44.629 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:04:44.629 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:44.629 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:04:44.629 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:44.629 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:04:44.629 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_2756029_0 00:04:44.629 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:04:44.629 associated memzone info: size: 48.002930 MiB name: MP_evtpool_2756029_0 00:04:44.629 element at address: 0x200003fff380 with size: 48.003052 MiB 00:04:44.630 associated memzone info: size: 48.002930 MiB name: MP_msgpool_2756029_0 00:04:44.630 element at address: 0x2000195be940 with size: 20.255554 MiB 00:04:44.630 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:44.630 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:04:44.630 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:44.630 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:04:44.630 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_2756029 00:04:44.630 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:04:44.630 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_2756029 00:04:44.630 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:44.630 associated memzone info: size: 1.007996 MiB name: MP_evtpool_2756029 00:04:44.630 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:04:44.630 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:44.630 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:04:44.630 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:44.630 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:04:44.630 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:44.630 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:04:44.630 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:44.630 element at address: 0x200003eff180 with size: 1.000488 MiB 00:04:44.630 associated memzone info: size: 1.000366 MiB name: RG_ring_0_2756029 00:04:44.630 element at address: 0x200003affc00 with size: 1.000488 MiB 00:04:44.630 associated memzone info: size: 1.000366 MiB name: RG_ring_1_2756029 00:04:44.630 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:04:44.630 associated memzone info: size: 1.000366 MiB name: RG_ring_4_2756029 00:04:44.630 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:04:44.630 associated memzone info: size: 1.000366 MiB name: RG_ring_5_2756029 00:04:44.630 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:04:44.630 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_2756029 00:04:44.630 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:04:44.630 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:44.630 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:04:44.630 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:44.630 element at address: 0x20001947c540 with size: 0.250488 MiB 00:04:44.630 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:44.630 element at address: 0x200003adf880 with size: 0.125488 MiB 00:04:44.630 associated memzone info: size: 0.125366 MiB name: RG_ring_2_2756029 00:04:44.630 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:04:44.630 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:44.630 element at address: 0x200027e69100 with size: 0.023743 MiB 00:04:44.630 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:44.630 element at address: 0x200003adb5c0 with size: 0.016113 
MiB 00:04:44.630 associated memzone info: size: 0.015991 MiB name: RG_ring_3_2756029 00:04:44.630 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:04:44.630 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:44.630 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:04:44.630 associated memzone info: size: 0.000183 MiB name: MP_msgpool_2756029 00:04:44.630 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:04:44.630 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_2756029 00:04:44.630 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:04:44.630 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:44.630 12:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:44.630 12:05:37 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 2756029 00:04:44.630 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@950 -- # '[' -z 2756029 ']' 00:04:44.630 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@954 -- # kill -0 2756029 00:04:44.630 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # uname 00:04:44.630 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:44.630 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2756029 00:04:44.630 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:04:44.630 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:04:44.630 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2756029' 00:04:44.630 killing process with pid 2756029 00:04:44.630 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@969 -- # kill 2756029 00:04:44.630 12:05:37 dpdk_mem_utility -- common/autotest_common.sh@974 -- # wait 2756029 00:04:45.197 00:04:45.197 real 0m1.117s 
00:04:45.197 user 0m1.081s 00:04:45.197 sys 0m0.389s 00:04:45.197 12:05:38 dpdk_mem_utility -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:45.197 12:05:38 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.197 ************************************ 00:04:45.197 END TEST dpdk_mem_utility 00:04:45.197 ************************************ 00:04:45.197 12:05:38 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:45.197 12:05:38 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:45.197 12:05:38 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.197 12:05:38 -- common/autotest_common.sh@10 -- # set +x 00:04:45.197 ************************************ 00:04:45.197 START TEST event 00:04:45.197 ************************************ 00:04:45.197 12:05:38 event -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:45.197 * Looking for test storage... 
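The killprocess teardown traced above follows a reusable shell pattern: check the PID argument, probe the process with `kill -0`, inspect its command name with `ps`, then kill and reap it. A minimal sketch of that pattern (the helper below is illustrative, not the exact autotest_common.sh code; the real helper also branches on `uname`, while this keeps only the Linux path seen in the log):

```shell
#!/usr/bin/env bash
# Illustrative sketch of the killprocess pattern from the xtrace above.
killprocess() {
    local pid=$1
    [ -z "$pid" ] && return 1                 # no PID supplied
    kill -0 "$pid" 2>/dev/null || return 0    # process already gone
    # The trace inspects the command name and refuses to kill 'sudo'
    local name
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" = sudo ] && return 1
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap it; ignore non-child errors
}
```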
00:04:45.197 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:45.197 12:05:38 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:45.197 12:05:38 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:45.197 12:05:38 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:45.197 12:05:38 event -- common/autotest_common.sh@1101 -- # '[' 6 -le 1 ']' 00:04:45.197 12:05:38 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:45.197 12:05:38 event -- common/autotest_common.sh@10 -- # set +x 00:04:45.197 ************************************ 00:04:45.197 START TEST event_perf 00:04:45.197 ************************************ 00:04:45.197 12:05:38 event.event_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:45.197 Running I/O for 1 seconds...[2024-07-26 12:05:38.440164] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:04:45.197 [2024-07-26 12:05:38.440227] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756221 ] 00:04:45.456 EAL: No free 2048 kB hugepages reported on node 1 00:04:45.456 [2024-07-26 12:05:38.501214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:45.456 [2024-07-26 12:05:38.615011] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:45.456 [2024-07-26 12:05:38.615085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:45.456 [2024-07-26 12:05:38.615139] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:45.456 [2024-07-26 12:05:38.615143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.838 Running I/O for 1 seconds... 00:04:46.838 lcore 0: 236592 00:04:46.838 lcore 1: 236591 00:04:46.838 lcore 2: 236591 00:04:46.838 lcore 3: 236591 00:04:46.838 done. 
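The four lcore counters above come from running event_perf with `-m 0xF`. A coremask selects cores by bit position, so 0xF maps to lcores 0 through 3; a small bash sketch of that mapping (the 64-bit loop bound is illustrative):

```shell
#!/usr/bin/env bash
# Expand a DPDK-style hex coremask into the core indices it selects.
mask=0xF
cores=""
for ((i = 0; i < 64; i++)); do
    if (( (mask >> i) & 1 )); then   # bit i set -> core i is in the mask
        cores="$cores $i"
    fi
done
echo "cores:$cores"   # 0xF -> cores: 0 1 2 3
```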
00:04:46.838 00:04:46.838 real 0m1.314s 00:04:46.838 user 0m4.222s 00:04:46.838 sys 0m0.087s 00:04:46.838 12:05:39 event.event_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:46.838 12:05:39 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:46.838 ************************************ 00:04:46.838 END TEST event_perf 00:04:46.838 ************************************ 00:04:46.838 12:05:39 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:46.838 12:05:39 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:46.838 12:05:39 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:46.838 12:05:39 event -- common/autotest_common.sh@10 -- # set +x 00:04:46.839 ************************************ 00:04:46.839 START TEST event_reactor 00:04:46.839 ************************************ 00:04:46.839 12:05:39 event.event_reactor -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:46.839 [2024-07-26 12:05:39.802956] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
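The `START TEST` / `END TEST` banners that bracket each suite above come from a `run_test` wrapper in the harness. A simplified sketch of such a wrapper (banner text mirrors the log; the argument handling here is illustrative, not the exact autotest_common.sh implementation):

```shell
#!/usr/bin/env bash
# Simplified run_test wrapper: print banners around a named test command.
run_test() {
    local name=$1; shift
    echo '************************************'
    echo "START TEST $name"
    echo '************************************'
    "$@"                 # run the test command with its arguments
    local rc=$?
    echo '************************************'
    echo "END TEST $name"
    echo '************************************'
    return "$rc"
}
```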
00:04:46.839 [2024-07-26 12:05:39.803019] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756391 ] 00:04:46.839 EAL: No free 2048 kB hugepages reported on node 1 00:04:46.839 [2024-07-26 12:05:39.867798] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.839 [2024-07-26 12:05:39.984884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.218 test_start 00:04:48.218 oneshot 00:04:48.218 tick 100 00:04:48.218 tick 100 00:04:48.218 tick 250 00:04:48.218 tick 100 00:04:48.218 tick 100 00:04:48.218 tick 100 00:04:48.218 tick 250 00:04:48.218 tick 500 00:04:48.218 tick 100 00:04:48.218 tick 100 00:04:48.218 tick 250 00:04:48.218 tick 100 00:04:48.218 tick 100 00:04:48.218 test_end 00:04:48.218 00:04:48.218 real 0m1.317s 00:04:48.218 user 0m1.228s 00:04:48.218 sys 0m0.084s 00:04:48.218 12:05:41 event.event_reactor -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:48.218 12:05:41 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:48.218 ************************************ 00:04:48.218 END TEST event_reactor 00:04:48.218 ************************************ 00:04:48.218 12:05:41 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:48.218 12:05:41 event -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:04:48.218 12:05:41 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:48.218 12:05:41 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.218 ************************************ 00:04:48.218 START TEST event_reactor_perf 00:04:48.218 ************************************ 00:04:48.218 12:05:41 event.event_reactor_perf -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:48.218 [2024-07-26 12:05:41.169257] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:04:48.218 [2024-07-26 12:05:41.169318] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756649 ] 00:04:48.218 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.218 [2024-07-26 12:05:41.233150] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.218 [2024-07-26 12:05:41.349102] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.597 test_start 00:04:49.597 test_end 00:04:49.597 Performance: 356833 events per second 00:04:49.597 00:04:49.597 real 0m1.318s 00:04:49.597 user 0m1.233s 00:04:49.597 sys 0m0.080s 00:04:49.597 12:05:42 event.event_reactor_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:49.597 12:05:42 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.597 ************************************ 00:04:49.597 END TEST event_reactor_perf 00:04:49.597 ************************************ 00:04:49.597 12:05:42 event -- event/event.sh@49 -- # uname -s 00:04:49.597 12:05:42 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:49.597 12:05:42 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:49.597 12:05:42 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.597 12:05:42 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.597 12:05:42 event -- common/autotest_common.sh@10 -- # set +x 00:04:49.597 ************************************ 00:04:49.597 START TEST event_scheduler 00:04:49.597 ************************************ 
00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:49.597 * Looking for test storage... 00:04:49.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:49.597 12:05:42 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:49.597 12:05:42 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=2756829 00:04:49.597 12:05:42 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:49.597 12:05:42 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:49.597 12:05:42 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 2756829 00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@831 -- # '[' -z 2756829 ']' 00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.597 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.597 [2024-07-26 12:05:42.622987] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:04:49.597 [2024-07-26 12:05:42.623082] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2756829 ] 00:04:49.597 EAL: No free 2048 kB hugepages reported on node 1 00:04:49.597 [2024-07-26 12:05:42.680596] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:49.597 [2024-07-26 12:05:42.794529] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.597 [2024-07-26 12:05:42.794596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:49.597 [2024-07-26 12:05:42.794664] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:04:49.597 [2024-07-26 12:05:42.794661] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@864 -- # return 0 00:04:49.597 12:05:42 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.597 [2024-07-26 12:05:42.843484] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:49.597 [2024-07-26 12:05:42.843510] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:04:49.597 [2024-07-26 12:05:42.843526] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:49.597 [2024-07-26 12:05:42.843538] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:49.597 [2024-07-26 12:05:42.843548] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting 
scheduler core busy to 95 00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.597 12:05:42 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.597 12:05:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 [2024-07-26 12:05:42.940022] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:49.856 12:05:42 event.event_scheduler -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.856 12:05:42 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:49.856 12:05:42 event.event_scheduler -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:49.856 12:05:42 event.event_scheduler -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:49.856 12:05:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 ************************************ 00:04:49.856 START TEST scheduler_create_thread 00:04:49.856 ************************************ 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1125 -- # scheduler_create_thread 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 2 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd 
--plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 3 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 4 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.856 12:05:42 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 5 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 6 
00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 7 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 8 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 9 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:49.856 12:05:43 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 10 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@561 -- # xtrace_disable 00:04:49.856 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.422 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:04:50.422 00:04:50.422 real 0m0.592s 00:04:50.422 user 0m0.011s 00:04:50.422 sys 0m0.003s 00:04:50.422 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:50.422 12:05:43 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:50.422 ************************************ 00:04:50.422 END TEST scheduler_create_thread 00:04:50.422 ************************************ 00:04:50.423 12:05:43 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:50.423 12:05:43 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 2756829 00:04:50.423 12:05:43 event.event_scheduler -- common/autotest_common.sh@950 -- # '[' -z 2756829 ']' 00:04:50.423 12:05:43 event.event_scheduler -- common/autotest_common.sh@954 -- # kill -0 2756829 00:04:50.423 12:05:43 event.event_scheduler -- common/autotest_common.sh@955 -- # uname 00:04:50.423 12:05:43 event.event_scheduler -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:04:50.423 12:05:43 event.event_scheduler -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2756829 00:04:50.423 12:05:43 event.event_scheduler -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:04:50.423 12:05:43 
event.event_scheduler -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:04:50.423 12:05:43 event.event_scheduler -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2756829' 00:04:50.423 killing process with pid 2756829 00:04:50.423 12:05:43 event.event_scheduler -- common/autotest_common.sh@969 -- # kill 2756829 00:04:50.423 12:05:43 event.event_scheduler -- common/autotest_common.sh@974 -- # wait 2756829 00:04:50.988 [2024-07-26 12:05:44.040135] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 00:04:51.247 00:04:51.247 real 0m1.776s 00:04:51.247 user 0m2.271s 00:04:51.247 sys 0m0.321s 00:04:51.247 12:05:44 event.event_scheduler -- common/autotest_common.sh@1126 -- # xtrace_disable 00:04:51.247 12:05:44 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:51.247 ************************************ 00:04:51.247 END TEST event_scheduler 00:04:51.247 ************************************ 00:04:51.247 12:05:44 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:51.247 12:05:44 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:51.247 12:05:44 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:04:51.247 12:05:44 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:04:51.247 12:05:44 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.247 ************************************ 00:04:51.247 START TEST app_repeat 00:04:51.247 ************************************ 00:04:51.247 12:05:44 event.app_repeat -- common/autotest_common.sh@1125 -- # app_repeat_test 00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 
00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@19 -- # repeat_pid=2757056 00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 2757056' 00:04:51.247 Process app_repeat pid: 2757056 00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:51.247 spdk_app_start Round 0 00:04:51.247 12:05:44 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2757056 /var/tmp/spdk-nbd.sock 00:04:51.247 12:05:44 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2757056 ']' 00:04:51.247 12:05:44 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:51.247 12:05:44 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:51.247 12:05:44 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:51.247 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:51.247 12:05:44 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:51.247 12:05:44 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:51.247 [2024-07-26 12:05:44.381754] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
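The `waitforlisten` calls traced above block until the app under test is up and listening on its UNIX-domain RPC socket (`max_retries=100` in the trace). A hedged sketch of that idea (the retry count is a parameter here, and the real helper additionally issues an RPC to confirm the socket responds, which this omits):

```shell
#!/usr/bin/env bash
# Sketch of waitforlisten: poll until the process's RPC unix socket appears.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} retries=${3:-100} i
    for ((i = 1; i <= retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1   # app died while starting
        [ -S "$rpc_addr" ] && return 0           # socket exists: app is listening
        sleep 0.1
    done
    return 1                                     # timed out
}
```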
00:04:51.247 [2024-07-26 12:05:44.381819] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2757056 ] 00:04:51.247 EAL: No free 2048 kB hugepages reported on node 1 00:04:51.247 [2024-07-26 12:05:44.448324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:51.504 [2024-07-26 12:05:44.564723] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:51.504 [2024-07-26 12:05:44.564728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.504 12:05:44 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:51.504 12:05:44 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:51.504 12:05:44 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:51.762 Malloc0 00:04:51.762 12:05:44 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:52.019 Malloc1 00:04:52.019 12:05:45 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.019 12:05:45 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.019 12:05:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.019 12:05:45 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:52.019 12:05:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.019 12:05:45 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:52.019 12:05:45 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks 
/var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:52.019 12:05:45 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.019 12:05:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:52.019 12:05:45 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:52.019 12:05:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.019 12:05:45 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:52.020 12:05:45 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:52.020 12:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:52.020 12:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.020 12:05:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:52.278 /dev/nbd0 00:04:52.278 12:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:52.278 12:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd 
if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.278 1+0 records in 00:04:52.278 1+0 records out 00:04:52.278 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000196503 s, 20.8 MB/s 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:52.278 12:05:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:52.278 12:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.278 12:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.278 12:05:45 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:52.569 /dev/nbd1 00:04:52.569 12:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:52.569 12:05:45 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:52.569 12:05:45 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:52.569 12:05:45 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:52.569 12:05:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:52.569 12:05:45 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:52.569 12:05:45 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:52.569 12:05:45 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:52.569 12:05:45 event.app_repeat -- 
common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:52.569 12:05:45 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:52.569 12:05:45 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:52.569 1+0 records in 00:04:52.569 1+0 records out 00:04:52.569 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000192874 s, 21.2 MB/s 00:04:52.569 12:05:45 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.569 12:05:45 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:52.569 12:05:45 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:52.569 12:05:45 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:52.569 12:05:45 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:52.569 12:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:52.569 12:05:45 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:52.569 12:05:45 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:52.569 12:05:45 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:52.569 12:05:45 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:52.827 { 00:04:52.827 "nbd_device": "/dev/nbd0", 00:04:52.827 "bdev_name": "Malloc0" 00:04:52.827 }, 00:04:52.827 { 00:04:52.827 "nbd_device": "/dev/nbd1", 00:04:52.827 "bdev_name": "Malloc1" 00:04:52.827 } 00:04:52.827 ]' 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:52.827 { 
00:04:52.827 "nbd_device": "/dev/nbd0", 00:04:52.827 "bdev_name": "Malloc0" 00:04:52.827 }, 00:04:52.827 { 00:04:52.827 "nbd_device": "/dev/nbd1", 00:04:52.827 "bdev_name": "Malloc1" 00:04:52.827 } 00:04:52.827 ]' 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:52.827 /dev/nbd1' 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:52.827 /dev/nbd1' 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:52.827 256+0 records in 00:04:52.827 256+0 records out 00:04:52.827 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00406164 s, 258 MB/s 00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 
00:04:52.827 12:05:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:53.085 256+0 records in 00:04:53.085 256+0 records out 00:04:53.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0276505 s, 37.9 MB/s 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:53.085 256+0 records in 00:04:53.085 256+0 records out 00:04:53.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0262015 s, 40.0 MB/s 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.085 12:05:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:53.342 12:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:53.342 12:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:53.342 12:05:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:53.342 12:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.342 12:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.342 12:05:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:53.342 12:05:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.342 12:05:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.342 12:05:46 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:53.342 12:05:46 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:53.599 12:05:46 
event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:53.599 12:05:46 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:53.599 12:05:46 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:53.599 12:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:53.599 12:05:46 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:53.599 12:05:46 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:53.599 12:05:46 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:53.599 12:05:46 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:53.599 12:05:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:53.599 12:05:46 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:53.599 12:05:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:53.856 12:05:46 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:53.856 12:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:53.856 12:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:53.856 12:05:46 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:53.856 12:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:53.856 12:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:53.856 12:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:53.856 12:05:46 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:53.856 12:05:46 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:53.856 12:05:46 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:53.856 12:05:46 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:53.856 12:05:46 event.app_repeat -- 
bdev/nbd_common.sh@109 -- # return 0 00:04:53.856 12:05:46 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:54.114 12:05:47 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:04:54.372 [2024-07-26 12:05:47.506433] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:54.372 [2024-07-26 12:05:47.621267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:54.372 [2024-07-26 12:05:47.621267] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:54.633 [2024-07-26 12:05:47.682670] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:04:54.633 [2024-07-26 12:05:47.682743] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:04:57.165 12:05:50 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:57.165 12:05:50 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:04:57.165 spdk_app_start Round 1 00:04:57.165 12:05:50 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2757056 /var/tmp/spdk-nbd.sock 00:04:57.165 12:05:50 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2757056 ']' 00:04:57.165 12:05:50 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:57.165 12:05:50 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:04:57.165 12:05:50 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:57.165 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:04:57.165 12:05:50 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:04:57.165 12:05:50 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:57.423 12:05:50 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:04:57.423 12:05:50 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:04:57.423 12:05:50 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.681 Malloc0 00:04:57.681 12:05:50 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.940 Malloc1 00:04:57.940 12:05:51 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.940 12:05:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.198 /dev/nbd0 00:04:58.198 12:05:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.198 12:05:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.198 12:05:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:04:58.198 12:05:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:58.198 12:05:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:58.198 12:05:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:58.198 12:05:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:04:58.198 12:05:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:58.198 12:05:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:58.198 12:05:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:58.198 12:05:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.198 1+0 records in 00:04:58.198 1+0 records out 00:04:58.198 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000195738 s, 20.9 MB/s 00:04:58.198 12:05:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.198 12:05:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:58.198 12:05:51 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.198 12:05:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:58.198 12:05:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:58.198 12:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.198 12:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.198 12:05:51 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.457 /dev/nbd1 00:04:58.457 12:05:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.457 12:05:51 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.457 1+0 records in 00:04:58.457 1+0 records out 00:04:58.457 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000219417 s, 18.7 MB/s 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:04:58.457 12:05:51 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:04:58.457 12:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.457 12:05:51 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.457 12:05:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.457 12:05:51 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.457 12:05:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:58.715 { 00:04:58.715 "nbd_device": "/dev/nbd0", 00:04:58.715 "bdev_name": "Malloc0" 00:04:58.715 }, 00:04:58.715 { 00:04:58.715 "nbd_device": "/dev/nbd1", 00:04:58.715 "bdev_name": "Malloc1" 00:04:58.715 } 00:04:58.715 ]' 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.715 { 00:04:58.715 "nbd_device": "/dev/nbd0", 00:04:58.715 "bdev_name": "Malloc0" 00:04:58.715 }, 00:04:58.715 { 00:04:58.715 "nbd_device": "/dev/nbd1", 00:04:58.715 "bdev_name": "Malloc1" 00:04:58.715 } 00:04:58.715 ]' 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.715 /dev/nbd1' 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.715 /dev/nbd1' 00:04:58.715 
12:05:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.715 256+0 records in 00:04:58.715 256+0 records out 00:04:58.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00498817 s, 210 MB/s 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.715 256+0 records in 00:04:58.715 256+0 records out 00:04:58.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.024118 s, 43.5 MB/s 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.715 256+0 records in 00:04:58.715 256+0 records out 00:04:58.715 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0289617 s, 36.2 MB/s 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.715 12:05:51 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.282 12:05:52 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.282 12:05:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.540 12:05:52 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.540 12:05:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.540 12:05:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.540 12:05:52 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.799 12:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.799 12:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.799 12:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.799 12:05:52 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.799 12:05:52 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.799 12:05:52 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.799 12:05:52 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.799 12:05:52 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.799 12:05:52 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:00.058 12:05:53 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:00.318 [2024-07-26 12:05:53.333301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.318 [2024-07-26 12:05:53.446787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.318 [2024-07-26 12:05:53.446791] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.318 [2024-07-26 12:05:53.508800] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.318 [2024-07-26 12:05:53.508906] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.852 12:05:56 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.852 12:05:56 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:02.852 spdk_app_start Round 2 00:05:02.852 12:05:56 event.app_repeat -- event/event.sh@25 -- # waitforlisten 2757056 /var/tmp/spdk-nbd.sock 00:05:02.852 12:05:56 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2757056 ']' 00:05:02.852 12:05:56 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.852 12:05:56 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:02.852 12:05:56 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:02.852 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:02.852 12:05:56 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:02.852 12:05:56 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:03.110 12:05:56 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:03.110 12:05:56 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:03.110 12:05:56 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.368 Malloc0 00:05:03.368 12:05:56 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.626 Malloc1 00:05:03.626 12:05:56 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.626 12:05:56 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.885 /dev/nbd0 00:05:03.885 12:05:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.885 12:05:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.885 12:05:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd0 00:05:03.885 12:05:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:03.885 12:05:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:03.885 12:05:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:03.885 12:05:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd0 /proc/partitions 00:05:03.885 12:05:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:03.885 12:05:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:03.885 12:05:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:03.885 12:05:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.885 1+0 records in 00:05:03.885 1+0 records out 00:05:03.885 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000238044 s, 17.2 MB/s 00:05:03.885 12:05:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.885 12:05:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:03.885 12:05:57 event.app_repeat -- 
common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:03.885 12:05:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:03.885 12:05:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:03.885 12:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.885 12:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.885 12:05:57 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:04.144 /dev/nbd1 00:05:04.144 12:05:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:04.144 12:05:57 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:04.144 12:05:57 event.app_repeat -- common/autotest_common.sh@868 -- # local nbd_name=nbd1 00:05:04.144 12:05:57 event.app_repeat -- common/autotest_common.sh@869 -- # local i 00:05:04.144 12:05:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i = 1 )) 00:05:04.144 12:05:57 event.app_repeat -- common/autotest_common.sh@871 -- # (( i <= 20 )) 00:05:04.144 12:05:57 event.app_repeat -- common/autotest_common.sh@872 -- # grep -q -w nbd1 /proc/partitions 00:05:04.144 12:05:57 event.app_repeat -- common/autotest_common.sh@873 -- # break 00:05:04.144 12:05:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i = 1 )) 00:05:04.144 12:05:57 event.app_repeat -- common/autotest_common.sh@884 -- # (( i <= 20 )) 00:05:04.144 12:05:57 event.app_repeat -- common/autotest_common.sh@885 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:04.402 1+0 records in 00:05:04.402 1+0 records out 00:05:04.402 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000225314 s, 18.2 MB/s 00:05:04.402 12:05:57 event.app_repeat -- common/autotest_common.sh@886 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.402 12:05:57 event.app_repeat -- common/autotest_common.sh@886 -- # size=4096 00:05:04.402 12:05:57 event.app_repeat -- common/autotest_common.sh@887 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:04.403 12:05:57 event.app_repeat -- common/autotest_common.sh@888 -- # '[' 4096 '!=' 0 ']' 00:05:04.403 12:05:57 event.app_repeat -- common/autotest_common.sh@889 -- # return 0 00:05:04.403 12:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:04.403 12:05:57 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:04.403 12:05:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.403 12:05:57 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.403 12:05:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.403 12:05:57 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:04.403 { 00:05:04.403 "nbd_device": "/dev/nbd0", 00:05:04.403 "bdev_name": "Malloc0" 00:05:04.403 }, 00:05:04.403 { 00:05:04.403 "nbd_device": "/dev/nbd1", 00:05:04.403 "bdev_name": "Malloc1" 00:05:04.403 } 00:05:04.403 ]' 00:05:04.403 12:05:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:04.403 { 00:05:04.403 "nbd_device": "/dev/nbd0", 00:05:04.403 "bdev_name": "Malloc0" 00:05:04.403 }, 00:05:04.403 { 00:05:04.403 "nbd_device": "/dev/nbd1", 00:05:04.403 "bdev_name": "Malloc1" 00:05:04.403 } 00:05:04.403 ]' 00:05:04.403 12:05:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:04.661 /dev/nbd1' 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:04.661 /dev/nbd1' 00:05:04.661 
12:05:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:04.661 256+0 records in 00:05:04.661 256+0 records out 00:05:04.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00506678 s, 207 MB/s 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:04.661 256+0 records in 00:05:04.661 256+0 records out 00:05:04.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0269738 s, 38.9 MB/s 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:04.661 256+0 records in 00:05:04.661 256+0 records out 00:05:04.661 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0263635 s, 39.8 MB/s 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.661 12:05:57 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.919 12:05:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.919 12:05:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.919 12:05:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.919 12:05:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.919 12:05:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.919 12:05:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.919 12:05:58 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.919 12:05:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.919 12:05:58 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.919 12:05:58 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:05.177 12:05:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:05.177 12:05:58 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:05.177 12:05:58 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:05.177 12:05:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:05.177 12:05:58 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:05.177 12:05:58 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:05.177 12:05:58 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:05.177 12:05:58 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:05.177 12:05:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:05.177 12:05:58 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:05.177 12:05:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:05.435 12:05:58 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:05.435 12:05:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:05.435 12:05:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:05.435 12:05:58 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:05.435 12:05:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:05.435 12:05:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:05.435 12:05:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:05.435 12:05:58 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:05.435 12:05:58 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:05.435 12:05:58 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:05.435 12:05:58 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:05.435 12:05:58 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:05.435 12:05:58 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:05.693 12:05:58 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.953 [2024-07-26 12:05:59.147633] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:06.211 [2024-07-26 12:05:59.262963] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:06.211 [2024-07-26 12:05:59.262963] 
reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:06.211 [2024-07-26 12:05:59.325268] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:06.211 [2024-07-26 12:05:59.325353] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.746 12:06:01 event.app_repeat -- event/event.sh@38 -- # waitforlisten 2757056 /var/tmp/spdk-nbd.sock 00:05:08.746 12:06:01 event.app_repeat -- common/autotest_common.sh@831 -- # '[' -z 2757056 ']' 00:05:08.746 12:06:01 event.app_repeat -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.746 12:06:01 event.app_repeat -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:08.746 12:06:01 event.app_repeat -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:08.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
00:05:08.746 12:06:01 event.app_repeat -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:08.746 12:06:01 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.005 12:06:02 event.app_repeat -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:09.005 12:06:02 event.app_repeat -- common/autotest_common.sh@864 -- # return 0 00:05:09.005 12:06:02 event.app_repeat -- event/event.sh@39 -- # killprocess 2757056 00:05:09.005 12:06:02 event.app_repeat -- common/autotest_common.sh@950 -- # '[' -z 2757056 ']' 00:05:09.005 12:06:02 event.app_repeat -- common/autotest_common.sh@954 -- # kill -0 2757056 00:05:09.005 12:06:02 event.app_repeat -- common/autotest_common.sh@955 -- # uname 00:05:09.005 12:06:02 event.app_repeat -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:09.005 12:06:02 event.app_repeat -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2757056 00:05:09.005 12:06:02 event.app_repeat -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:09.005 12:06:02 event.app_repeat -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:09.005 12:06:02 event.app_repeat -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2757056' 00:05:09.005 killing process with pid 2757056 00:05:09.005 12:06:02 event.app_repeat -- common/autotest_common.sh@969 -- # kill 2757056 00:05:09.005 12:06:02 event.app_repeat -- common/autotest_common.sh@974 -- # wait 2757056 00:05:09.264 spdk_app_start is called in Round 0. 00:05:09.264 Shutdown signal received, stop current app iteration 00:05:09.264 Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 reinitialization... 00:05:09.264 spdk_app_start is called in Round 1. 00:05:09.265 Shutdown signal received, stop current app iteration 00:05:09.265 Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 reinitialization... 00:05:09.265 spdk_app_start is called in Round 2. 
00:05:09.265 Shutdown signal received, stop current app iteration 00:05:09.265 Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 reinitialization... 00:05:09.265 spdk_app_start is called in Round 3. 00:05:09.265 Shutdown signal received, stop current app iteration 00:05:09.265 12:06:02 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:09.265 12:06:02 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:09.265 00:05:09.265 real 0m18.065s 00:05:09.265 user 0m39.016s 00:05:09.265 sys 0m3.294s 00:05:09.265 12:06:02 event.app_repeat -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:09.265 12:06:02 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:09.265 ************************************ 00:05:09.265 END TEST app_repeat 00:05:09.265 ************************************ 00:05:09.265 12:06:02 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:09.265 12:06:02 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:09.265 12:06:02 event -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.265 12:06:02 event -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.265 12:06:02 event -- common/autotest_common.sh@10 -- # set +x 00:05:09.265 ************************************ 00:05:09.265 START TEST cpu_locks 00:05:09.265 ************************************ 00:05:09.265 12:06:02 event.cpu_locks -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:09.265 * Looking for test storage... 
00:05:09.265 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:09.265 12:06:02 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:09.265 12:06:02 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:09.524 12:06:02 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:09.524 12:06:02 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:09.524 12:06:02 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:09.524 12:06:02 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:09.524 12:06:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.524 ************************************ 00:05:09.524 START TEST default_locks 00:05:09.524 ************************************ 00:05:09.524 12:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@1125 -- # default_locks 00:05:09.524 12:06:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=2759609 00:05:09.524 12:06:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 2759609 00:05:09.524 12:06:02 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:09.524 12:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2759609 ']' 00:05:09.524 12:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:09.524 12:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:09.524 12:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:09.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:09.524 12:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:09.524 12:06:02 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:09.525 [2024-07-26 12:06:02.590516] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:05:09.525 [2024-07-26 12:06:02.590612] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759609 ] 00:05:09.525 EAL: No free 2048 kB hugepages reported on node 1 00:05:09.525 [2024-07-26 12:06:02.669495] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:09.785 [2024-07-26 12:06:02.807990] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:10.044 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.044 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 0 00:05:10.044 12:06:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 2759609 00:05:10.044 12:06:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 2759609 00:05:10.044 12:06:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:10.308 lslocks: write error 00:05:10.308 12:06:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 2759609 00:05:10.308 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@950 -- # '[' -z 2759609 ']' 00:05:10.308 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # kill -0 2759609 00:05:10.308 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@955 -- # uname 00:05:10.308 12:06:03 event.cpu_locks.default_locks 
-- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:10.308 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2759609 00:05:10.308 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:10.308 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:10.308 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2759609' 00:05:10.308 killing process with pid 2759609 00:05:10.308 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@969 -- # kill 2759609 00:05:10.308 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@974 -- # wait 2759609 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 2759609 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # local es=0 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2759609 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@638 -- # local arg=waitforlisten 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # type -t waitforlisten 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # waitforlisten 2759609 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@831 -- # '[' -z 2759609 ']' 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2759609) - No such process 00:05:10.912 ERROR: process (pid: 2759609) is no longer running 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # return 1 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@653 -- # es=1 00:05:10.912 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:10.913 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:10.913 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:10.913 12:06:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:10.913 12:06:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:10.913 12:06:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:10.913 12:06:03 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:10.913 00:05:10.913 real 0m1.346s 00:05:10.913 user 0m1.349s 00:05:10.913 sys 0m0.569s 00:05:10.913 12:06:03 event.cpu_locks.default_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:10.913 12:06:03 event.cpu_locks.default_locks -- 
common/autotest_common.sh@10 -- # set +x 00:05:10.913 ************************************ 00:05:10.913 END TEST default_locks 00:05:10.913 ************************************ 00:05:10.913 12:06:03 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:10.913 12:06:03 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:10.913 12:06:03 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:10.913 12:06:03 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:10.913 ************************************ 00:05:10.913 START TEST default_locks_via_rpc 00:05:10.913 ************************************ 00:05:10.913 12:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1125 -- # default_locks_via_rpc 00:05:10.913 12:06:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=2759779 00:05:10.913 12:06:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:10.913 12:06:03 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 2759779 00:05:10.913 12:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2759779 ']' 00:05:10.913 12:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:10.913 12:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:10.913 12:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:10.913 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:10.913 12:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:10.913 12:06:03 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:10.913 [2024-07-26 12:06:03.988072] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:05:10.913 [2024-07-26 12:06:03.988179] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759779 ]
00:05:10.913 EAL: No free 2048 kB hugepages reported on node 1
00:05:10.913 [2024-07-26 12:06:04.046948] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:10.913 [2024-07-26 12:06:04.159110] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:11.172 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:11.172 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # return 0
00:05:11.172 12:06:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:11.172 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:11.172 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.431 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:11.431 12:06:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:11.431 12:06:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:11.431 12:06:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:11.431 12:06:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:11.431 12:06:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:11.431 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:11.431 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:11.431 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:11.431 12:06:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 2759779
00:05:11.431 12:06:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 2759779
00:05:11.431 12:06:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:11.689 12:06:04 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 2759779
00:05:11.689 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@950 -- # '[' -z 2759779 ']'
00:05:11.689 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # kill -0 2759779
00:05:11.689 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # uname
00:05:11.689 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:11.689 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2759779
00:05:11.689 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:11.689 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:11.689 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2759779'
00:05:11.689 killing process with pid 2759779
00:05:11.689 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@969 -- # kill 2759779
00:05:11.689 12:06:04 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@974 -- # wait 2759779
00:05:12.256
00:05:12.256 real 0m1.319s
00:05:12.256 user 0m1.264s
00:05:12.256 sys 0m0.531s
00:05:12.256 12:06:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:12.256 12:06:05 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:12.256 ************************************
00:05:12.256 END TEST default_locks_via_rpc
00:05:12.256 ************************************
00:05:12.256 12:06:05 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
00:05:12.256 12:06:05 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:12.256 12:06:05 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:12.256 12:06:05 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:12.256 ************************************
00:05:12.256 START TEST non_locking_app_on_locked_coremask
00:05:12.256 ************************************
00:05:12.256 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # non_locking_app_on_locked_coremask
00:05:12.256 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=2759945
00:05:12.256 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:12.256 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 2759945 /var/tmp/spdk.sock
00:05:12.256 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2759945 ']'
00:05:12.256 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:12.257 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:12.257 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:12.257 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:12.257 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:12.257 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:12.257 [2024-07-26 12:06:05.352327] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:05:12.257 [2024-07-26 12:06:05.352429] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2759945 ]
00:05:12.257 EAL: No free 2048 kB hugepages reported on node 1
00:05:12.257 [2024-07-26 12:06:05.408828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:12.515 [2024-07-26 12:06:05.519752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:12.773 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:12.773 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:12.773 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=2760020
00:05:12.773 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:12.773 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 2760020 /var/tmp/spdk2.sock
00:05:12.773 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2760020 ']'
00:05:12.773 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:12.773 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:12.773 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:12.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:12.773 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:12.773 12:06:05 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:12.773 [2024-07-26 12:06:05.827652] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:05:12.773 [2024-07-26 12:06:05.827728] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2760020 ]
00:05:12.773 EAL: No free 2048 kB hugepages reported on node 1
00:05:12.773 [2024-07-26 12:06:05.917744] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:12.773 [2024-07-26 12:06:05.917777] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:13.031 [2024-07-26 12:06:06.150976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:13.596 12:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:13.596 12:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:13.596 12:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 2759945
00:05:13.596 12:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2759945
00:05:13.596 12:06:06 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:14.161 lslocks: write error
00:05:14.161 12:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 2759945
00:05:14.161 12:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2759945 ']'
00:05:14.161 12:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2759945
00:05:14.161 12:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:14.161 12:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:14.161 12:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2759945
00:05:14.161 12:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:14.161 12:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:14.161 12:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2759945'
00:05:14.161 killing process with pid 2759945
00:05:14.161 12:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2759945
00:05:14.161 12:06:07 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2759945
00:05:15.095 12:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 2760020
00:05:15.095 12:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2760020 ']'
00:05:15.095 12:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2760020
00:05:15.095 12:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:15.095 12:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:15.095 12:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2760020
00:05:15.095 12:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:15.095 12:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:15.095 12:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2760020'
00:05:15.095 killing process with pid 2760020
00:05:15.095 12:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2760020
00:05:15.095 12:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2760020
00:05:15.662
00:05:15.662 real 0m3.375s
00:05:15.662 user 0m3.530s
00:05:15.662 sys 0m1.081s
00:05:15.662 12:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:15.662 12:06:08 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:15.662 ************************************
00:05:15.662 END TEST non_locking_app_on_locked_coremask
00:05:15.662 ************************************
00:05:15.662 12:06:08 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask
00:05:15.662 12:06:08 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:15.662 12:06:08 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:15.662 12:06:08 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:15.662 ************************************
00:05:15.662 START TEST locking_app_on_unlocked_coremask
00:05:15.662 ************************************
00:05:15.662 12:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_unlocked_coremask
00:05:15.662 12:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=2760843
00:05:15.662 12:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks
00:05:15.662 12:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 2760843 /var/tmp/spdk.sock
00:05:15.662 12:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2760843 ']'
00:05:15.662 12:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:15.662 12:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:15.662 12:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:15.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:15.662 12:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:15.662 12:06:08 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:15.662 [2024-07-26 12:06:08.777246] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:05:15.662 [2024-07-26 12:06:08.777331] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2760843 ]
00:05:15.662 EAL: No free 2048 kB hugepages reported on node 1
00:05:15.662 [2024-07-26 12:06:08.834204] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated.
00:05:15.662 [2024-07-26 12:06:08.834242] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:15.920 [2024-07-26 12:06:08.942958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:16.178 12:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:16.178 12:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:16.178 12:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=2761006
00:05:16.178 12:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:16.178 12:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 2761006 /var/tmp/spdk2.sock
00:05:16.178 12:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2761006 ']'
00:05:16.178 12:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:16.178 12:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:16.178 12:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:16.178 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:16.178 12:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:16.178 12:06:09 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:16.178 [2024-07-26 12:06:09.243642] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:05:16.178 [2024-07-26 12:06:09.243720] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761006 ]
00:05:16.178 EAL: No free 2048 kB hugepages reported on node 1
00:05:16.178 [2024-07-26 12:06:09.334955] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:16.436 [2024-07-26 12:06:09.572616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:17.002 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:17.002 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:17.002 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 2761006
00:05:17.002 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2761006
00:05:17.002 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:17.568 lslocks: write error
00:05:17.568 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 2760843
00:05:17.568 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2760843 ']'
00:05:17.568 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2760843
00:05:17.568 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:17.568 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:17.568 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2760843
00:05:17.568 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:17.568 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:17.568 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2760843'
00:05:17.568 killing process with pid 2760843
00:05:17.568 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2760843
00:05:17.568 12:06:10 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2760843
00:05:18.506 12:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 2761006
00:05:18.506 12:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2761006 ']'
00:05:18.506 12:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # kill -0 2761006
00:05:18.506 12:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:18.506 12:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:18.506 12:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2761006
00:05:18.506 12:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:18.506 12:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:18.506 12:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2761006'
00:05:18.506 killing process with pid 2761006
00:05:18.506 12:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@969 -- # kill 2761006
00:05:18.506 12:06:11 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@974 -- # wait 2761006
00:05:18.765
00:05:18.765 real 0m3.282s
00:05:18.765 user 0m3.448s
00:05:18.765 sys 0m1.001s
00:05:18.765 12:06:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:18.765 12:06:12 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:18.765 ************************************
00:05:18.765 END TEST locking_app_on_unlocked_coremask
00:05:18.765 ************************************
00:05:19.024 12:06:12 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask
00:05:19.024 12:06:12 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:19.024 12:06:12 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:19.024 12:06:12 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:19.024 ************************************
00:05:19.024 START TEST locking_app_on_locked_coremask
00:05:19.024 ************************************
00:05:19.024 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1125 -- # locking_app_on_locked_coremask
00:05:19.024 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=2761327
00:05:19.024 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:19.024 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 2761327 /var/tmp/spdk.sock
00:05:19.024 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2761327 ']'
00:05:19.024 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:19.024 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:19.024 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:19.024 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:19.024 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:19.024 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:19.024 [2024-07-26 12:06:12.111583] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:05:19.024 [2024-07-26 12:06:12.111682] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761327 ]
00:05:19.024 EAL: No free 2048 kB hugepages reported on node 1
00:05:19.024 [2024-07-26 12:06:12.177501] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:19.283 [2024-07-26 12:06:12.299597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=2761414
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 2761414 /var/tmp/spdk2.sock
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2761414 /var/tmp/spdk2.sock
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2761414 /var/tmp/spdk2.sock
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@831 -- # '[' -z 2761414 ']'
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:19.541 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:19.541 12:06:12 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:19.541 [2024-07-26 12:06:12.607954] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:05:19.541 [2024-07-26 12:06:12.608030] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761414 ]
00:05:19.541 EAL: No free 2048 kB hugepages reported on node 1
00:05:19.541 [2024-07-26 12:06:12.700765] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 2761327 has claimed it.
00:05:19.541 [2024-07-26 12:06:12.700824] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:20.117 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2761414) - No such process
00:05:20.117 ERROR: process (pid: 2761414) is no longer running
00:05:20.117 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:20.117 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # return 1
00:05:20.117 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:20.117 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:20.117 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:20.117 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:20.117 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 2761327
00:05:20.117 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 2761327
00:05:20.117 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:20.687 lslocks: write error
00:05:20.687 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 2761327
00:05:20.687 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@950 -- # '[' -z 2761327 ']'
00:05:20.687 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # kill -0 2761327
00:05:20.687 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # uname
00:05:20.687 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:20.687 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2761327
00:05:20.687 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:20.687 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:20.687 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2761327'
00:05:20.687 killing process with pid 2761327
00:05:20.687 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@969 -- # kill 2761327
00:05:20.687 12:06:13 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@974 -- # wait 2761327
00:05:20.945
00:05:20.945 real 0m2.115s
00:05:20.945 user 0m2.272s
00:05:20.945 sys 0m0.687s
00:05:20.945 12:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:20.945 12:06:14 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:20.945 ************************************
00:05:20.945 END TEST locking_app_on_locked_coremask
00:05:20.945 ************************************
00:05:20.945 12:06:14 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask
00:05:20.945 12:06:14 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:20.945 12:06:14 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:20.945 12:06:14 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:21.203 ************************************
00:05:21.203 START TEST locking_overlapped_coremask
00:05:21.203 ************************************
00:05:21.203 12:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask
00:05:21.203 12:06:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=2761621
00:05:21.203 12:06:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7
00:05:21.203 12:06:14 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 2761621 /var/tmp/spdk.sock
00:05:21.203 12:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2761621 ']'
00:05:21.203 12:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:21.203 12:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:21.203 12:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:21.203 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:21.203 12:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:21.203 12:06:14 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:21.203 [2024-07-26 12:06:14.266507] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:05:21.203 [2024-07-26 12:06:14.266602] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761621 ]
00:05:21.203 EAL: No free 2048 kB hugepages reported on node 1
00:05:21.203 [2024-07-26 12:06:14.329993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:05:21.203 [2024-07-26 12:06:14.446944] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:05:21.203 [2024-07-26 12:06:14.446998] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:05:21.203 [2024-07-26 12:06:14.447017] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 0
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=2761753
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 2761753 /var/tmp/spdk2.sock
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # local es=0
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # valid_exec_arg waitforlisten 2761753 /var/tmp/spdk2.sock
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@638 -- # local arg=waitforlisten
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # type -t waitforlisten
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # waitforlisten 2761753 /var/tmp/spdk2.sock
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@831 -- # '[' -z 2761753 ']'
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
00:05:22.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:22.133 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:22.133 [2024-07-26 12:06:15.251819] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:05:22.133 [2024-07-26 12:06:15.251921] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761753 ]
00:05:22.133 EAL: No free 2048 kB hugepages reported on node 1
00:05:22.133 [2024-07-26 12:06:15.342920] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2761621 has claimed it.
00:05:22.133 [2024-07-26 12:06:15.342984] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting.
00:05:23.065 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 846: kill: (2761753) - No such process
00:05:23.065 ERROR: process (pid: 2761753) is no longer running
00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # return 1
00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@653 -- # es=1
00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks
00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*)
00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
00:05:23.065 12:06:15
event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 2761621 00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@950 -- # '[' -z 2761621 ']' 00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # kill -0 2761621 00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # uname 00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2761621 00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2761621' 00:05:23.065 killing process with pid 2761621 00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@969 -- # kill 2761621 00:05:23.065 12:06:15 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@974 -- # wait 2761621 00:05:23.323 00:05:23.323 real 0m2.229s 00:05:23.323 user 0m6.235s 00:05:23.323 sys 0m0.514s 00:05:23.323 12:06:16 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:23.323 12:06:16 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.323 ************************************ 00:05:23.323 END TEST locking_overlapped_coremask 00:05:23.323 ************************************ 00:05:23.323 12:06:16 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:23.323 12:06:16 event.cpu_locks -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:23.323 12:06:16 event.cpu_locks -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:23.323 12:06:16 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.323 ************************************ 00:05:23.323 START TEST locking_overlapped_coremask_via_rpc 00:05:23.323 ************************************ 00:05:23.323 12:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1125 -- # locking_overlapped_coremask_via_rpc 00:05:23.323 12:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=2761923 00:05:23.323 12:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:23.323 12:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 2761923 /var/tmp/spdk.sock 00:05:23.323 12:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2761923 ']' 00:05:23.323 12:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.323 12:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:23.323 12:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.323 12:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:23.323 12:06:16 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:23.323 [2024-07-26 12:06:16.533463] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:05:23.323 [2024-07-26 12:06:16.533534] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2761923 ] 00:05:23.323 EAL: No free 2048 kB hugepages reported on node 1 00:05:23.581 [2024-07-26 12:06:16.595516] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:23.581 [2024-07-26 12:06:16.595550] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:23.581 [2024-07-26 12:06:16.712346] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:23.581 [2024-07-26 12:06:16.712400] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.581 [2024-07-26 12:06:16.712417] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.513 12:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:24.513 12:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:24.513 12:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=2762055 00:05:24.513 12:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 2762055 /var/tmp/spdk2.sock 00:05:24.513 12:06:17 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2762055 ']' 00:05:24.513 12:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.513 12:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:24.513 12:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.514 12:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:24.514 12:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:24.514 12:06:17 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:24.514 [2024-07-26 12:06:17.515480] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:05:24.514 [2024-07-26 12:06:17.515580] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762055 ] 00:05:24.514 EAL: No free 2048 kB hugepages reported on node 1 00:05:24.514 [2024-07-26 12:06:17.604882] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
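[Editor's note] The `check_remaining_locks` steps in these tests compare `/var/tmp/spdk_cpu_lock_000..002` against the claimed core mask, and the earlier failure ("Cannot create lock on core 2, probably process 2761621 has claimed it") shows a second target with an overlapping mask being refused. Below is a minimal sketch of that per-core lock-file scheme; the file naming follows the log, but the use of `fcntl.flock` and the helper `claim_core` are illustrative assumptions, not SPDK's actual implementation (which lives in `app.c`).

```python
import fcntl
import os
import tempfile

def claim_core(lock_dir, core):
    """Try to claim a CPU core by taking a non-blocking exclusive
    flock on its lock file; return the fd on success, None if the
    core is already claimed. (Hypothetical helper for illustration.)"""
    path = os.path.join(lock_dir, "spdk_cpu_lock_%03d" % core)
    fd = os.open(path, os.O_RDWR | os.O_CREAT, 0o600)
    try:
        fcntl.flock(fd, fcntl.LOCK_EX | fcntl.LOCK_NB)
        return fd  # lock is held as long as this fd stays open
    except BlockingIOError:
        os.close(fd)
        return None

lock_dir = tempfile.mkdtemp()
# First target claims cores 0-2, like `spdk_tgt -m 0x7` in the log.
first = [claim_core(lock_dir, c) for c in (0, 1, 2)]
# Second claim on core 2 (part of mask 0x1c) is refused while the
# first holder is alive, matching the "Cannot create lock on core 2"
# error; flock conflicts apply even between fds in one process.
second = claim_core(lock_dir, 2)
print(second)  # None
```

Running the second target with `--disable-cpumask-locks`, as this test does, skips the claim step entirely until `framework_enable_cpumask_locks` is called.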
00:05:24.514 [2024-07-26 12:06:17.604928] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:24.771 [2024-07-26 12:06:17.828725] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:24.771 [2024-07-26 12:06:17.828786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:24.771 [2024-07-26 12:06:17.828788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # local es=0 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.336 12:06:18 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.336 [2024-07-26 12:06:18.479157] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 2761923 has claimed it. 00:05:25.336 request: 00:05:25.336 { 00:05:25.336 "method": "framework_enable_cpumask_locks", 00:05:25.336 "req_id": 1 00:05:25.336 } 00:05:25.336 Got JSON-RPC error response 00:05:25.336 response: 00:05:25.336 { 00:05:25.336 "code": -32603, 00:05:25.336 "message": "Failed to claim CPU core: 2" 00:05:25.336 } 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@653 -- # es=1 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 2761923 /var/tmp/spdk.sock 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 
-- # '[' -z 2761923 ']' 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.336 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.336 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.593 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.593 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:25.593 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 2762055 /var/tmp/spdk2.sock 00:05:25.593 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@831 -- # '[' -z 2762055 ']' 00:05:25.593 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:25.593 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:25.593 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:25.593 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
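[Editor's note] The RPC failure above surfaces as a JSON-RPC error object with `"code": -32603` (the JSON-RPC "internal error" code) and the message `"Failed to claim CPU core: 2"`. A short sketch of consuming that error as a client would; the full 2.0 envelope (`jsonrpc`, `id`) is an assumption added for illustration, since the log prints only the request/response bodies.

```python
import json

# Error response shaped like the one logged for
# `framework_enable_cpumask_locks` on the second target.
raw = """
{
  "jsonrpc": "2.0",
  "id": 1,
  "error": {
    "code": -32603,
    "message": "Failed to claim CPU core: 2"
  }
}
"""
response = json.loads(raw)
err = response.get("error")
# A client distinguishes "internal error" (claim failed) from
# "method not found" (-32601, seen later in the cmdline test).
print(err["code"], err["message"])
```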
00:05:25.593 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:25.593 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:25.849 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # return 0 00:05:25.849 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:25.849 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:25.849 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:25.849 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:25.849 00:05:25.849 real 0m2.500s 00:05:25.849 user 0m1.209s 00:05:25.849 sys 0m0.219s 00:05:25.849 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:25.849 12:06:18 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:25.849 ************************************ 00:05:25.849 END TEST locking_overlapped_coremask_via_rpc 00:05:25.849 ************************************ 00:05:25.849 12:06:19 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:25.849 12:06:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2761923 ]] 00:05:25.849 12:06:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 2761923 00:05:25.849 12:06:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2761923 ']' 00:05:25.849 12:06:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2761923 00:05:25.849 12:06:19 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:25.849 12:06:19 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:25.849 12:06:19 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2761923 00:05:25.849 12:06:19 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:05:25.849 12:06:19 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:05:25.849 12:06:19 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2761923' 00:05:25.849 killing process with pid 2761923 00:05:25.849 12:06:19 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2761923 00:05:25.849 12:06:19 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2761923 00:05:26.411 12:06:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2762055 ]] 00:05:26.411 12:06:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2762055 00:05:26.411 12:06:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2762055 ']' 00:05:26.411 12:06:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2762055 00:05:26.411 12:06:19 event.cpu_locks -- common/autotest_common.sh@955 -- # uname 00:05:26.411 12:06:19 event.cpu_locks -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:26.411 12:06:19 event.cpu_locks -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2762055 00:05:26.411 12:06:19 event.cpu_locks -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:05:26.411 12:06:19 event.cpu_locks -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:05:26.411 12:06:19 event.cpu_locks -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2762055' 00:05:26.411 killing process with pid 2762055 00:05:26.411 12:06:19 event.cpu_locks -- common/autotest_common.sh@969 -- # kill 2762055 00:05:26.411 12:06:19 event.cpu_locks -- common/autotest_common.sh@974 -- # wait 2762055 00:05:27.031 12:06:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:27.031 12:06:19 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:27.031 12:06:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 2761923 ]] 00:05:27.031 12:06:19 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 2761923 00:05:27.031 12:06:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2761923 ']' 00:05:27.031 12:06:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2761923 00:05:27.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2761923) - No such process 00:05:27.031 12:06:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2761923 is not found' 00:05:27.031 Process with pid 2761923 is not found 00:05:27.031 12:06:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 2762055 ]] 00:05:27.031 12:06:19 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 2762055 00:05:27.031 12:06:19 event.cpu_locks -- common/autotest_common.sh@950 -- # '[' -z 2762055 ']' 00:05:27.031 12:06:19 event.cpu_locks -- common/autotest_common.sh@954 -- # kill -0 2762055 00:05:27.031 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2762055) - No such process 00:05:27.031 12:06:19 event.cpu_locks -- common/autotest_common.sh@977 -- # echo 'Process with pid 2762055 is not found' 00:05:27.031 Process with pid 2762055 is not found 00:05:27.031 12:06:19 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:27.031 00:05:27.031 real 0m17.503s 00:05:27.031 user 0m31.837s 00:05:27.031 sys 0m5.473s 00:05:27.031 12:06:19 event.cpu_locks -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.031 
12:06:19 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:27.031 ************************************ 00:05:27.031 END TEST cpu_locks 00:05:27.031 ************************************ 00:05:27.031 00:05:27.031 real 0m41.638s 00:05:27.031 user 1m19.943s 00:05:27.031 sys 0m9.571s 00:05:27.031 12:06:19 event -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:27.031 12:06:19 event -- common/autotest_common.sh@10 -- # set +x 00:05:27.031 ************************************ 00:05:27.031 END TEST event 00:05:27.031 ************************************ 00:05:27.031 12:06:20 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:27.031 12:06:20 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:05:27.031 12:06:20 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.031 12:06:20 -- common/autotest_common.sh@10 -- # set +x 00:05:27.031 ************************************ 00:05:27.031 START TEST thread 00:05:27.031 ************************************ 00:05:27.031 12:06:20 thread -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:27.031 * Looking for test storage... 
00:05:27.031 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:27.031 12:06:20 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:27.031 12:06:20 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:27.031 12:06:20 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:27.031 12:06:20 thread -- common/autotest_common.sh@10 -- # set +x 00:05:27.031 ************************************ 00:05:27.031 START TEST thread_poller_perf 00:05:27.031 ************************************ 00:05:27.031 12:06:20 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:27.031 [2024-07-26 12:06:20.123927] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:05:27.031 [2024-07-26 12:06:20.123997] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762434 ] 00:05:27.031 EAL: No free 2048 kB hugepages reported on node 1 00:05:27.031 [2024-07-26 12:06:20.186384] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:27.288 [2024-07-26 12:06:20.301563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.288 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:28.221 ====================================== 00:05:28.221 busy:2711081226 (cyc) 00:05:28.221 total_run_count: 290000 00:05:28.221 tsc_hz: 2700000000 (cyc) 00:05:28.221 ====================================== 00:05:28.221 poller_cost: 9348 (cyc), 3462 (nsec) 00:05:28.221 00:05:28.221 real 0m1.322s 00:05:28.221 user 0m1.241s 00:05:28.221 sys 0m0.076s 00:05:28.221 12:06:21 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:28.221 12:06:21 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:28.221 ************************************ 00:05:28.221 END TEST thread_poller_perf 00:05:28.221 ************************************ 00:05:28.221 12:06:21 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:28.221 12:06:21 thread -- common/autotest_common.sh@1101 -- # '[' 8 -le 1 ']' 00:05:28.221 12:06:21 thread -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:28.221 12:06:21 thread -- common/autotest_common.sh@10 -- # set +x 00:05:28.479 ************************************ 00:05:28.479 START TEST thread_poller_perf 00:05:28.479 ************************************ 00:05:28.479 12:06:21 thread.thread_poller_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:28.479 [2024-07-26 12:06:21.496389] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:05:28.479 [2024-07-26 12:06:21.496451] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762589 ] 00:05:28.479 EAL: No free 2048 kB hugepages reported on node 1 00:05:28.479 [2024-07-26 12:06:21.557793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:28.479 [2024-07-26 12:06:21.671717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:28.479 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:29.854 ====================================== 00:05:29.854 busy:2702534404 (cyc) 00:05:29.854 total_run_count: 3876000 00:05:29.854 tsc_hz: 2700000000 (cyc) 00:05:29.854 ====================================== 00:05:29.854 poller_cost: 697 (cyc), 258 (nsec) 00:05:29.854 00:05:29.854 real 0m1.309s 00:05:29.854 user 0m1.230s 00:05:29.854 sys 0m0.074s 00:05:29.854 12:06:22 thread.thread_poller_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.854 12:06:22 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:29.854 ************************************ 00:05:29.854 END TEST thread_poller_perf 00:05:29.854 ************************************ 00:05:29.854 12:06:22 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:29.854 00:05:29.854 real 0m2.778s 00:05:29.854 user 0m2.535s 00:05:29.854 sys 0m0.243s 00:05:29.854 12:06:22 thread -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:29.854 12:06:22 thread -- common/autotest_common.sh@10 -- # set +x 00:05:29.854 ************************************ 00:05:29.854 END TEST thread 00:05:29.854 ************************************ 00:05:29.854 12:06:22 -- spdk/autotest.sh@184 -- # [[ 0 -eq 1 ]] 00:05:29.854 12:06:22 -- spdk/autotest.sh@189 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 
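[Editor's note] The `poller_cost` figures in the two perf runs above are derived directly from the printed counters: busy cycles divided by `total_run_count`, then converted to nanoseconds with `tsc_hz`. The arithmetic can be checked against both runs (the integer-division helper is illustrative; `poller_perf` itself is C code):

```python
def poller_cost(busy_cyc, run_count, tsc_hz):
    """Cost per poller invocation in (cycles, nanoseconds),
    using integer arithmetic as the tool's output suggests."""
    cyc = busy_cyc // run_count
    nsec = cyc * 1_000_000_000 // tsc_hz
    return cyc, nsec

# Run with -l 1 (1 us period): matches "poller_cost: 9348 (cyc), 3462 (nsec)"
print(poller_cost(2711081226, 290000, 2700000000))   # (9348, 3462)
# Run with -l 0 (busy loop): matches "poller_cost: 697 (cyc), 258 (nsec)"
print(poller_cost(2702534404, 3876000, 2700000000))  # (697, 258)
```

The roughly 13x lower per-call cost in the second run reflects the 0-microsecond period: pollers run back-to-back with no timer arming between invocations.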
00:05:29.854 12:06:22 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:29.854 12:06:22 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:29.854 12:06:22 -- common/autotest_common.sh@10 -- # set +x
00:05:29.854 ************************************
00:05:29.854 START TEST app_cmdline
00:05:29.854 ************************************
00:05:29.854 12:06:22 app_cmdline -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh
00:05:29.854 * Looking for test storage...
00:05:29.854 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:05:29.854 12:06:22 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT
00:05:29.854 12:06:22 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=2762895
00:05:29.854 12:06:22 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods
00:05:29.854 12:06:22 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 2762895
00:05:29.854 12:06:22 app_cmdline -- common/autotest_common.sh@831 -- # '[' -z 2762895 ']'
00:05:29.854 12:06:22 app_cmdline -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:29.854 12:06:22 app_cmdline -- common/autotest_common.sh@836 -- # local max_retries=100
00:05:29.854 12:06:22 app_cmdline -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:29.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:29.854 12:06:22 app_cmdline -- common/autotest_common.sh@840 -- # xtrace_disable
00:05:29.854 12:06:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:29.854 [2024-07-26 12:06:22.965035] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:05:29.854 [2024-07-26 12:06:22.965139] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2762895 ]
00:05:29.854 EAL: No free 2048 kB hugepages reported on node 1
00:05:29.854 [2024-07-26 12:06:23.025747] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:30.113 [2024-07-26 12:06:23.143096] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:05:30.678 12:06:23 app_cmdline -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:05:30.678 12:06:23 app_cmdline -- common/autotest_common.sh@864 -- # return 0
00:05:30.678 12:06:23 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version
00:05:30.936 {
00:05:30.936 "version": "SPDK v24.09-pre git sha1 fb47d9517",
00:05:30.936 "fields": {
00:05:30.936 "major": 24,
00:05:30.936 "minor": 9,
00:05:30.936 "patch": 0,
00:05:30.936 "suffix": "-pre",
00:05:30.936 "commit": "fb47d9517"
00:05:30.936 }
00:05:30.936 }
00:05:30.936 12:06:24 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=()
00:05:30.936 12:06:24 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods")
00:05:30.936 12:06:24 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version")
00:05:30.936 12:06:24 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort))
00:05:30.936 12:06:24 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods
00:05:30.936 12:06:24 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]'
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@561 -- # xtrace_disable
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:30.936 12:06:24 app_cmdline -- app/cmdline.sh@26 -- # sort
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:05:30.936 12:06:24 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 ))
00:05:30.936 12:06:24 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]]
00:05:30.936 12:06:24 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@650 -- # local es=0
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]]
00:05:30.936 12:06:24 app_cmdline -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats
00:05:31.196 request:
00:05:31.196 {
00:05:31.196 "method": "env_dpdk_get_mem_stats",
00:05:31.196 "req_id": 1
00:05:31.196 }
00:05:31.196 Got JSON-RPC error response
00:05:31.196 response:
00:05:31.196 {
00:05:31.196 "code": -32601,
00:05:31.196 "message": "Method not found"
00:05:31.196 }
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@653 -- # es=1
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@661 -- # (( es > 128 ))
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@672 -- # [[ -n '' ]]
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@677 -- # (( !es == 0 ))
00:05:31.196 12:06:24 app_cmdline -- app/cmdline.sh@1 -- # killprocess 2762895
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@950 -- # '[' -z 2762895 ']'
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@954 -- # kill -0 2762895
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@955 -- # uname
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2762895
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2762895'
00:05:31.196 killing process with pid 2762895
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@969 -- # kill 2762895
00:05:31.196 12:06:24 app_cmdline -- common/autotest_common.sh@974 -- # wait 2762895
00:05:31.763
00:05:31.763 real 0m2.056s
00:05:31.763 user 0m2.568s
00:05:31.763 sys 0m0.487s
00:05:31.763 12:06:24 app_cmdline -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:31.763 12:06:24 app_cmdline -- common/autotest_common.sh@10 -- # set +x
00:05:31.763 ************************************
00:05:31.763 END TEST app_cmdline
00:05:31.763 ************************************
00:05:31.763 12:06:24 -- spdk/autotest.sh@190 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:05:31.763 12:06:24 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']'
00:05:31.763 12:06:24 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:31.763 12:06:24 -- common/autotest_common.sh@10 -- # set +x
00:05:31.763 ************************************
00:05:31.763 START TEST version
00:05:31.763 ************************************
00:05:31.763 12:06:24 version -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh
00:05:32.023 * Looking for test storage...
00:05:32.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app
00:05:32.023 12:06:25 version -- app/version.sh@17 -- # get_header_version major
00:05:32.023 12:06:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:05:32.023 12:06:25 version -- app/version.sh@14 -- # cut -f2
00:05:32.023 12:06:25 version -- app/version.sh@14 -- # tr -d '"'
00:05:32.023 12:06:25 version -- app/version.sh@17 -- # major=24
00:05:32.023 12:06:25 version -- app/version.sh@18 -- # get_header_version minor
00:05:32.023 12:06:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:05:32.023 12:06:25 version -- app/version.sh@14 -- # cut -f2
00:05:32.023 12:06:25 version -- app/version.sh@14 -- # tr -d '"'
00:05:32.023 12:06:25 version -- app/version.sh@18 -- # minor=9
00:05:32.024 12:06:25 version -- app/version.sh@19 -- # get_header_version patch
00:05:32.024 12:06:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:05:32.024 12:06:25 version -- app/version.sh@14 -- # cut -f2
00:05:32.024 12:06:25 version -- app/version.sh@14 -- # tr -d '"'
00:05:32.024 12:06:25 version -- app/version.sh@19 -- # patch=0
00:05:32.024 12:06:25 version -- app/version.sh@20 -- # get_header_version suffix
00:05:32.024 12:06:25 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h
00:05:32.024 12:06:25 version -- app/version.sh@14 -- # cut -f2
00:05:32.024 12:06:25 version -- app/version.sh@14 -- # tr -d '"'
00:05:32.024 12:06:25 version -- app/version.sh@20 -- # suffix=-pre
00:05:32.024 12:06:25 version -- app/version.sh@22 -- # version=24.9
00:05:32.024 12:06:25 version -- app/version.sh@25 -- # (( patch != 0 ))
00:05:32.024 12:06:25 version -- app/version.sh@28 -- # version=24.9rc0
00:05:32.024 12:06:25 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python
00:05:32.024 12:06:25 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)'
00:05:32.024 12:06:25 version -- app/version.sh@30 -- # py_version=24.9rc0
00:05:32.024 12:06:25 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]]
00:05:32.024
00:05:32.024 real 0m0.105s
00:05:32.024 user 0m0.055s
00:05:32.024 sys 0m0.071s
00:05:32.024 12:06:25 version -- common/autotest_common.sh@1126 -- # xtrace_disable
00:05:32.024 12:06:25 version -- common/autotest_common.sh@10 -- # set +x
00:05:32.024 ************************************
00:05:32.024 END TEST version
00:05:32.024 ************************************
00:05:32.024 12:06:25 -- spdk/autotest.sh@192 -- # '[' 0 -eq 1 ']'
00:05:32.024 12:06:25 -- spdk/autotest.sh@202 -- # uname -s
00:05:32.024 12:06:25 -- spdk/autotest.sh@202 -- # [[ Linux == Linux ]]
00:05:32.024 12:06:25 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]]
00:05:32.024 12:06:25 -- spdk/autotest.sh@203 -- # [[ 0 -eq 1 ]]
00:05:32.024 12:06:25 -- spdk/autotest.sh@215 -- # '[' 0 -eq 1 ']'
00:05:32.024 12:06:25 -- spdk/autotest.sh@260 -- # '[' 0 -eq 1 ']'
00:05:32.024 12:06:25 -- spdk/autotest.sh@264 -- # timing_exit lib
00:05:32.024 12:06:25 -- common/autotest_common.sh@730 -- # xtrace_disable
00:05:32.024 12:06:25 -- common/autotest_common.sh@10 -- # set +x
00:05:32.024 12:06:25 -- spdk/autotest.sh@266 -- # '[' 0 -eq 1 ']'
00:05:32.024 12:06:25 -- spdk/autotest.sh@274 -- # '[' 0 -eq 1 ']'
00:05:32.024 12:06:25 -- spdk/autotest.sh@283 -- # '[' 1 -eq 1 ']'
00:05:32.024 12:06:25 -- spdk/autotest.sh@284 -- # export NET_TYPE
00:05:32.024 12:06:25 -- spdk/autotest.sh@287 -- # '[' tcp = rdma ']'
00:05:32.024 12:06:25 -- spdk/autotest.sh@290 -- # '[' tcp = tcp ']'
00:05:32.024 12:06:25 -- spdk/autotest.sh@291 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:05:32.024 12:06:25 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:05:32.024 12:06:25 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:32.024 12:06:25 -- common/autotest_common.sh@10 -- # set +x
00:05:32.024 ************************************
00:05:32.024 START TEST nvmf_tcp
00:05:32.024 ************************************
00:05:32.024 12:06:25 nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp
00:05:32.024 * Looking for test storage...
00:05:32.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:05:32.024 12:06:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s
00:05:32.024 12:06:25 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']'
00:05:32.024 12:06:25 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:05:32.024 12:06:25 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:05:32.024 12:06:25 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:32.024 12:06:25 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:05:32.024 ************************************
00:05:32.024 START TEST nvmf_target_core
00:05:32.024 ************************************
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp
00:05:32.024 * Looking for test storage...
00:05:32.024 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']'
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:32.024 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:32.025 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:32.025 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:05:32.025 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:05:32.025 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0
00:05:32.025 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:05:32.025 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@")
00:05:32.025 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]]
00:05:32.025 12:06:25 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:05:32.025 12:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:05:32.025 12:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable
00:05:32.025 12:06:25 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x
00:05:32.284 ************************************
00:05:32.284 START TEST nvmf_abort
00:05:32.284 ************************************
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp
00:05:32.284 * Looking for test storage...
00:05:32.284 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:05:32.284 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable
00:05:32.285 12:06:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=()
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=()
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=()
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=()
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=()
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=()
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=()
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:05:34.188 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:05:34.188 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:05:34.188 Found net devices under 0000:0a:00.0: cvl_0_0
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:05:34.188 Found net devices under 0000:0a:00.1: cvl_0_1
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:05:34.188 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:05:34.189 12:06:27
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:34.189 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:34.189 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:05:34.189 00:05:34.189 --- 10.0.0.2 ping statistics --- 00:05:34.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.189 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:34.189 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:05:34.189 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:05:34.189 00:05:34.189 --- 10.0.0.1 ping statistics --- 00:05:34.189 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:34.189 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:34.189 12:06:27 
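
The trace above (common.sh@229-268) builds the test network: the target-side port cvl_0_0 is moved into a dedicated namespace cvl_0_0_ns_spdk and given 10.0.0.2/24, the initiator-side port cvl_0_1 stays in the root namespace with 10.0.0.1/24, port 4420 is opened, and both directions are pinged. A minimal dry-run sketch of those steps (interface names, addresses, and the namespace name are taken from the log; it only prints the commands, since applying them needs root and the actual cvl_0_* netdevs):

```shell
#!/bin/sh
# Dry-run sketch of the namespace topology traced in the log above.
# Prints each command instead of executing it.
TARGET_IF=cvl_0_0        # moved into the namespace, carries the target IP
INITIATOR_IF=cvl_0_1     # stays in the root namespace
NS=cvl_0_0_ns_spdk

emit() { echo "$@"; }

emit ip netns add "$NS"
emit ip link set "$TARGET_IF" netns "$NS"
emit ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
emit ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
emit ip link set "$INITIATOR_IF" up
emit ip netns exec "$NS" ip link set "$TARGET_IF" up
emit ip netns exec "$NS" ip link set lo up
emit iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
```

Running nvmf_tgt under `ip netns exec cvl_0_0_ns_spdk` (as common.sh@480 does) then isolates the target's TCP stack from the initiator's, so the two NIC ports talk over the wire rather than the loopback path.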
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=2764952 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 2764952 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@831 -- # '[' -z 2764952 ']' 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.189 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:34.189 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.189 [2024-07-26 12:06:27.423866] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:05:34.189 [2024-07-26 12:06:27.423952] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:34.447 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.447 [2024-07-26 12:06:27.488793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:34.447 [2024-07-26 12:06:27.600200] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:34.447 [2024-07-26 12:06:27.600261] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:34.447 [2024-07-26 12:06:27.600275] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:34.447 [2024-07-26 12:06:27.600287] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:34.447 [2024-07-26 12:06:27.600297] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:05:34.447 [2024-07-26 12:06:27.600384] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:34.447 [2024-07-26 12:06:27.600512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:34.448 [2024-07-26 12:06:27.600514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # return 0 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.706 [2024-07-26 12:06:27.747150] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.706 Malloc0 00:05:34.706 12:06:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.706 Delay0 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.706 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.707 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.707 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:34.707 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.707 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.707 [2024-07-26 12:06:27.813966] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:34.707 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.707 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:34.707 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:34.707 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:34.707 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:34.707 12:06:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:34.707 EAL: No free 2048 kB hugepages reported on node 1 00:05:34.707 [2024-07-26 12:06:27.910260] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:37.237 Initializing NVMe Controllers 00:05:37.237 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:37.237 controller IO queue size 128 less than required 00:05:37.237 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:37.237 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:37.237 Initialization complete. Launching workers. 
00:05:37.237 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 29850 00:05:37.237 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29911, failed to submit 62 00:05:37.237 success 29854, unsuccess 57, failed 0 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:05:37.237 rmmod nvme_tcp 00:05:37.237 rmmod nvme_fabrics 00:05:37.237 rmmod nvme_keyring 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:05:37.237 12:06:30 
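
The abort summary above reports per-namespace I/O counts and per-controller abort counts. A small sketch (the line formats are copied from this log, so treat the field layout as an assumption) that extracts the counters and checks that submitted aborts equal successes plus unsuccesses:

```shell
#!/bin/sh
# Parse the abort summary lines from the log above and sanity-check
# the counters: submitted aborts should equal success + unsuccess.
summary='CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 29911, failed to submit 62'
result='success 29854, unsuccess 57, failed 0'

submitted=$(printf '%s\n' "$summary" | sed 's/.*abort submitted \([0-9]*\),.*/\1/')
success=$(printf '%s\n' "$result" | sed 's/.*success \([0-9]*\), unsuccess.*/\1/')
unsuccess=$(printf '%s\n' "$result" | sed 's/.*unsuccess \([0-9]*\),.*/\1/')

echo "submitted=$submitted success=$success unsuccess=$unsuccess"
[ $((success + unsuccess)) -eq "$submitted" ] && echo "counters consistent"
```

For the run above, 29854 + 57 = 29911, so the abort accounting balances.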
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 2764952 ']' 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 2764952 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@950 -- # '[' -z 2764952 ']' 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # kill -0 2764952 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # uname 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2764952 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2764952' 00:05:37.237 killing process with pid 2764952 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@969 -- # kill 2764952 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@974 -- # wait 2764952 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:05:37.237 12:06:30 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:37.237 12:06:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:05:39.770 00:05:39.770 real 0m7.202s 00:05:39.770 user 0m10.733s 00:05:39.770 sys 0m2.381s 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:39.770 ************************************ 00:05:39.770 END TEST nvmf_abort 00:05:39.770 ************************************ 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:39.770 ************************************ 00:05:39.770 START TEST nvmf_ns_hotplug_stress 00:05:39.770 ************************************ 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:39.770 * Looking for test storage... 
00:05:39.770 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:39.770 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:39.771 12:06:32 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:05:39.771 12:06:32 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:05:41.675 12:06:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:05:41.675 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:05:41.675 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:41.675 12:06:34 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:05:41.675 Found net devices under 0000:0a:00.0: cvl_0_0 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:05:41.675 Found net devices 
under 0000:0a:00.1: cvl_0_1 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 
addr flush cvl_0_0 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:05:41.675 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:05:41.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:41.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.265 ms 00:05:41.676 00:05:41.676 --- 10.0.0.2 ping statistics --- 00:05:41.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:41.676 rtt min/avg/max/mdev = 0.265/0.265/0.265/0.000 ms 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:05:41.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:41.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:05:41.676 00:05:41.676 --- 10.0.0.1 ping statistics --- 00:05:41.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:41.676 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=2767182 00:05:41.676 12:06:34 
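[editor's note] The `nvmf_tcp_init` steps logged above (flush both ports of the NIC, move the target-side port into a network namespace, assign 10.0.0.1/10.0.0.2, open TCP 4420, then ping both ways) amount to roughly the following. Interface names (`cvl_0_0`, `cvl_0_1`), the namespace name, and the addresses come straight from the log; the commands need root, so this sketch skips itself on a non-root shell.

```shell
# Sketch of nvmf_tcp_init as logged (nvmf/common.sh@229-268). Needs root;
# on a non-root shell it just reports and exits cleanly.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0        # target side, moved into the namespace
INI_IF=cvl_0_1        # initiator side, stays in the default namespace

if [ "$(id -u)" -ne 0 ]; then
    echo "skipping: netns setup requires root"
else
    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    # allow NVMe/TCP (port 4420) in from the initiator interface
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2
fi
```

Splitting the two ports of one physical NIC across a namespace is what lets a single host act as both NVMe-oF target and initiator over real hardware.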
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 2767182 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@831 -- # '[' -z 2767182 ']' 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:41.676 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:05:41.676 12:06:34 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:41.676 [2024-07-26 12:06:34.820206] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:05:41.676 [2024-07-26 12:06:34.820281] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:41.676 EAL: No free 2048 kB hugepages reported on node 1 00:05:41.676 [2024-07-26 12:06:34.886521] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:41.935 [2024-07-26 12:06:35.002522] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:05:41.935 [2024-07-26 12:06:35.002586] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:41.935 [2024-07-26 12:06:35.002611] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:41.935 [2024-07-26 12:06:35.002624] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:41.935 [2024-07-26 12:06:35.002636] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:41.935 [2024-07-26 12:06:35.002734] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:41.935 [2024-07-26 12:06:35.002846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:41.935 [2024-07-26 12:06:35.002850] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:42.868 12:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:05:42.868 12:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # return 0 00:05:42.868 12:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:05:42.868 12:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:05:42.868 12:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:42.868 12:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:42.868 12:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:42.868 12:06:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 
00:05:42.868 [2024-07-26 12:06:36.006862] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:42.868 12:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:43.126 12:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:43.384 [2024-07-26 12:06:36.521680] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:43.384 12:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:43.643 12:06:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:43.909 Malloc0 00:05:43.909 12:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:44.215 Delay0 00:05:44.215 12:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:44.473 12:06:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:44.731 NULL1 00:05:44.731 12:06:37 
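[editor's note] The target setup just logged (`ns_hotplug_stress.sh@27-36`: TCP transport, subsystem `cnode1`, listeners, `Malloc0` wrapped in `Delay0`, a `NULL1` bdev, namespaces attached) can be read as the rpc.py sequence below. `rpc` here is a stand-in that only echoes, since the real calls need a live `nvmf_tgt` process.

```shell
# rpc.py sequence from the log, with an echo stub so it runs standalone.
rpc() { echo "rpc.py $*"; }
NQN=nqn.2016-06.io.spdk:cnode1

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0
rpc bdev_delay_create -b Malloc0 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc bdev_null_create NULL1 1000 512
rpc nvmf_subsystem_add_ns "$NQN" Delay0
rpc nvmf_subsystem_add_ns "$NQN" NULL1
```

The delay bdev (1 s on every I/O path) is what keeps requests in flight long enough for the hotplug churn below to race against them.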
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:44.989 12:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=2767606 00:05:44.989 12:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:44.989 12:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:44.989 12:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:44.989 EAL: No free 2048 kB hugepages reported on node 1 00:05:45.246 12:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:45.503 12:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:45.504 12:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:45.761 true 00:05:45.761 12:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:45.761 12:06:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:46.019 12:06:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:46.277 12:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:46.277 12:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:46.277 true 00:05:46.535 12:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:46.535 12:06:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.468 Read completed with error (sct=0, sc=11) 00:05:47.468 12:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:47.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.468 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:47.468 12:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:47.468 12:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:47.725 true 00:05:47.725 12:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:47.725 12:06:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:47.983 12:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:48.240 12:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:05:48.240 12:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:05:48.498 true 00:05:48.498 12:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:48.498 12:06:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:49.429 12:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:49.429 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:49.687 12:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:05:49.687 12:06:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:05:49.944 true 00:05:49.944 12:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:49.944 12:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:50.201 12:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:50.457 12:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:05:50.457 12:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:05:50.715 true 00:05:50.715 12:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:50.715 12:06:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:51.647 12:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:51.905 12:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:05:51.905 12:06:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:05:52.162 true 00:05:52.162 12:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:52.162 12:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:52.419 
12:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:52.677 12:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:05:52.677 12:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:05:52.677 true 00:05:52.677 12:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:52.677 12:06:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:53.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.608 12:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:53.608 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:53.866 12:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:05:53.866 12:06:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:05:54.123 true 00:05:54.123 12:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:54.123 12:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:05:54.380 12:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:54.638 12:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:05:54.638 12:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:05:54.895 true 00:05:54.895 12:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:54.895 12:06:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:55.829 12:06:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:55.829 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:56.087 12:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:05:56.087 12:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:05:56.344 true 00:05:56.344 12:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:56.344 12:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:56.601 12:06:49 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:56.859 12:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:05:56.859 12:06:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:05:57.116 true 00:05:57.116 12:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:57.116 12:06:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.048 12:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.048 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:05:58.048 12:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:05:58.048 12:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:05:58.305 true 00:05:58.305 12:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606 00:05:58.306 12:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:05:58.563 12:06:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:05:58.820 12:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014
00:05:58.820 12:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014
00:05:59.102 true
00:05:59.102 12:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:05:59.102 12:06:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:00.032 12:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:00.292 12:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015
00:06:00.292 12:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015
00:06:00.549 true
00:06:00.549 12:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:00.549 12:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:00.807 12:06:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:01.065 12:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016
00:06:01.065 12:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016
00:06:01.323 true
00:06:01.323 12:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:01.323 12:06:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:02.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:02.261 12:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:02.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:02.261 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:02.518 12:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017
00:06:02.518 12:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017
00:06:02.778 true
00:06:02.778 12:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:02.778 12:06:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:03.037 12:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:03.037 12:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018
00:06:03.037 12:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018
00:06:03.295 true
00:06:03.554 12:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:03.554 12:06:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:04.494 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:04.494 12:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:04.494 12:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019
00:06:04.494 12:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019
00:06:04.752 true
00:06:04.752 12:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:04.752 12:06:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:05.010 12:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:05.268 12:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020
00:06:05.268 12:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020
00:06:05.526 true
00:06:05.526 12:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:05.526 12:06:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:06.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:06.463 12:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:06.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:06.463 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:06.721 12:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021
00:06:06.721 12:06:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021
00:06:06.979 true
00:06:06.979 12:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:06.979 12:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:07.237 12:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:07.496 12:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022
00:06:07.496 12:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022
00:06:07.754 true
00:06:07.754 12:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:07.754 12:07:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:08.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:08.693 12:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:08.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:08.693 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:08.693 12:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023
00:06:08.693 12:07:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023
00:06:08.951 true
00:06:09.210 12:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:09.210 12:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:09.210 12:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:09.467 12:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024
00:06:09.467 12:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024
00:06:09.725 true
00:06:09.725 12:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:09.725 12:07:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:10.661 12:07:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:10.920 12:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025
00:06:10.920 12:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025
00:06:11.178 true
00:06:11.178 12:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:11.178 12:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:11.436 12:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:11.694 12:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026
00:06:11.694 12:07:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026
00:06:11.952 true
00:06:11.952 12:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:11.952 12:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:12.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:12.894 12:07:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:12.894 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:13.154 12:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027
00:06:13.154 12:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027
00:06:13.154 true
00:06:13.414 12:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:13.414 12:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:13.675 12:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:13.937 12:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028
00:06:13.937 12:07:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028
00:06:13.937 true
00:06:13.937 12:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:13.937 12:07:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:14.876 12:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:14.876 Message suppressed 999 times: Read completed with error (sct=0, sc=11)
00:06:15.133 12:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029
00:06:15.133 12:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029
00:06:15.391 Initializing NVMe Controllers
00:06:15.391 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:15.391 Controller IO queue size 128, less than required.
00:06:15.391 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:15.391 Controller IO queue size 128, less than required.
00:06:15.391 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:15.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:06:15.391 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:15.391 Initialization complete. Launching workers.
00:06:15.391 ========================================================
00:06:15.391 Latency(us)
00:06:15.391 Device Information : IOPS MiB/s Average min max
00:06:15.391 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 844.77 0.41 79823.21 2681.77 1011783.17
00:06:15.391 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 11388.90 5.56 11206.66 2927.58 361311.54
00:06:15.391 ========================================================
00:06:15.391 Total : 12233.67 5.97 15944.80 2681.77 1011783.17
00:06:15.391
00:06:15.391 true
00:06:15.391 12:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 2767606
00:06:15.391 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (2767606) - No such process
00:06:15.391 12:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 2767606
00:06:15.391 12:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:15.648 12:07:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:15.905 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:15.905 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=()
00:06:15.905 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 ))
00:06:15.905 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:15.905 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096
00:06:16.163 null0
00:06:16.163 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:16.163 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:16.163 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096
00:06:16.420 null1
00:06:16.420 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:16.420 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:16.420 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096
00:06:16.677 null2
00:06:16.677 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:16.677 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:16.677 12:07:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096
00:06:16.934 null3
00:06:16.935 12:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:16.935 12:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:16.935 12:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096
00:06:17.192 null4
00:06:17.192 12:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:17.192 12:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:17.192 12:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096
00:06:17.449 null5
00:06:17.449 12:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:17.449 12:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:17.449 12:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096
00:06:17.707 null6
00:06:17.708 12:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:17.708 12:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:17.708 12:07:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096
00:06:17.966 null7
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.966 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 2771666 2771667 2771669 2771671 2771673 2771675 2771677 2771679
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:17.967 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:18.225 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:18.225 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:18.225 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:18.225 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:18.225 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:18.225 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:18.225 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:18.225 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:18.484 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:06:18.742 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:06:18.742 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:18.742 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:18.742 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:06:18.742 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:06:18.742 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:06:18.742 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:06:18.742 12:07:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.000 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.259 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.259 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.259 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.259 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.259 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.259 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.259 12:07:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:19.259 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 
00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:19.517 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:19.775 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:19.775 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:19.775 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.775 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:19.775 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:19.775 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:19.775 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:19.775 12:07:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 
nqn.2016-06.io.spdk:cnode1 null7 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.034 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.291 12:07:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.291 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.291 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.291 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.292 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.292 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.292 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.292 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.549 12:07:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( ++i )) 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:20.549 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.550 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:20.550 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:20.550 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:20.550 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:20.808 12:07:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:20.808 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:20.808 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.808 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:20.808 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:20.808 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:20.808 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:20.808 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 
nqn.2016-06.io.spdk:cnode1 null6 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.067 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.326 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:21.326 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.326 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:21.326 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:21.584 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:21.584 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.584 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:21.584 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:21.584 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.584 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.584 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:21.584 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.584 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.584 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.842 12:07:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:21.842 12:07:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:21.842 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:21.842 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.100 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.100 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.100 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.100 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.100 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.100 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.100 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.100 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.100 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.100 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.100 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.100 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 
00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.358 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:22.616 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:22.616 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:22.616 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:22.616 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns 
nqn.2016-06.io.spdk:cnode1 1 00:06:22.616 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:22.616 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:22.616 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:22.874 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.874 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:22.875 12:07:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:23.133 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:23.133 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:23.133 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:23.133 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:23.133 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.133 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:23.133 12:07:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:23.133 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- 
# (( i < 10 )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:23.393 rmmod nvme_tcp 00:06:23.393 rmmod nvme_fabrics 00:06:23.393 rmmod nvme_keyring 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 2767182 ']' 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 2767182 00:06:23.393 12:07:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@950 -- # '[' -z 2767182 ']' 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # kill -0 2767182 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # uname 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2767182 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2767182' 00:06:23.393 killing process with pid 2767182 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@969 -- # kill 2767182 00:06:23.393 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@974 -- # wait 2767182 00:06:23.651 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:23.651 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:23.651 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:23.651 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:23.651 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:23.651 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:23.651 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:23.651 12:07:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.191 12:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:26.191 00:06:26.191 real 0m46.392s 00:06:26.191 user 3m31.053s 00:06:26.191 sys 0m16.190s 00:06:26.191 12:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:26.191 12:07:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:26.191 ************************************ 00:06:26.191 END TEST nvmf_ns_hotplug_stress 00:06:26.191 ************************************ 00:06:26.191 12:07:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:26.191 12:07:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:26.191 12:07:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:26.191 12:07:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:26.191 ************************************ 00:06:26.191 START TEST nvmf_delete_subsystem 00:06:26.191 ************************************ 00:06:26.191 12:07:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:26.191 * Looking for test storage... 
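The interleaved `ns_hotplug_stress.sh@16`–`@18` xtrace markers in the log above come from several concurrent add/remove loops, one per namespace, which is why the per-namespace RPC calls appear out of order. A minimal standalone sketch of that pattern, inferred from the trace (the `rpc_py` stub and the exact internals of the SPDK script are assumptions):

```shell
#!/usr/bin/env bash
# Sketch of the hotplug stress pattern inferred from the xtrace above.
# rpc_py stands in for scripts/rpc.py so the sketch runs standalone.
rpc_py() { echo "rpc.py $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

# ns_hotplug_stress.sh@16-18: attach a null bdev as a namespace, then
# detach it again, ten times per namespace.
add_remove() {
    local nsid=$1 bdev=$2 i
    for ((i = 0; i < 10; i++)); do
        rpc_py nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"
        rpc_py nvmf_subsystem_remove_ns "$NQN" "$nsid"
    done
}

# Eight loops run in parallel, which is why the per-namespace markers
# interleave in the log; 'wait' reaps all of them before teardown.
for n in {1..8}; do
    add_remove "$n" "null$((n - 1))" &
done
wait
```

Each background loop drives one namespace ID against one `null` bdev, so the target's hot-add/hot-remove path is exercised from eight directions at once, matching the jumbled ordering of `nvmf_subsystem_add_ns`/`nvmf_subsystem_remove_ns` calls in the trace.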
00:06:26.191 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:26.191 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini 
SIGINT SIGTERM EXIT 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:06:26.192 12:07:19 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:28.097 12:07:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:28.097 12:07:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:28.097 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:28.097 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.097 12:07:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:28.097 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:28.097 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:28.098 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 
)) 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:28.098 
12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:28.098 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:28.098 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:06:28.098 00:06:28.098 --- 10.0.0.2 ping statistics --- 00:06:28.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.098 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:28.098 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:28.098 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.194 ms 00:06:28.098 00:06:28.098 --- 10.0.0.1 ping statistics --- 00:06:28.098 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:28.098 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=2774433 00:06:28.098 12:07:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 2774433 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@831 -- # '[' -z 2774433 ']' 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:28.098 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:28.098 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.098 [2024-07-26 12:07:21.272447] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:06:28.098 [2024-07-26 12:07:21.272511] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:28.098 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.098 [2024-07-26 12:07:21.334531] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:28.356 [2024-07-26 12:07:21.445122] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:28.356 [2024-07-26 12:07:21.445200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:28.356 [2024-07-26 12:07:21.445237] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:28.356 [2024-07-26 12:07:21.445249] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:28.356 [2024-07-26 12:07:21.445259] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:28.356 [2024-07-26 12:07:21.445315] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.356 [2024-07-26 12:07:21.445321] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # return 0 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.356 [2024-07-26 12:07:21.589532] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.356 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.356 [2024-07-26 12:07:21.605762] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.619 NULL1 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.619 12:07:21 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.619 Delay0 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=2774458 00:06:28.619 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:28.620 12:07:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:28.620 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.620 [2024-07-26 12:07:21.680465] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:30.573 12:07:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:30.573 12:07:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:30.573 12:07:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:30.834 Read completed with error (sct=0, sc=8) 00:06:30.834 Read completed with error (sct=0, sc=8) 00:06:30.834 starting I/O failed: -6 00:06:30.834 Read completed with error (sct=0, sc=8) 00:06:30.834 Read completed with error (sct=0, sc=8) 00:06:30.834 Read completed with error (sct=0, sc=8) 00:06:30.834 Read completed with error (sct=0, sc=8) 00:06:30.834 starting I/O failed: -6 00:06:30.834 Read completed with error (sct=0, sc=8) 00:06:30.834 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error 
(sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, 
sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with 
error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 [2024-07-26 12:07:23.858955] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f03e0 is same with the state(5) to be set 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with 
error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 starting I/O failed: -6 00:06:30.835 [2024-07-26 12:07:23.860212] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f659c000c00 is same with the state(5) to be set 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.835 Read completed with error (sct=0, sc=8) 00:06:30.835 Write completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Write completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Write completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Write completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Write completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Write completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 Read completed with error (sct=0, sc=8) 00:06:30.836 
Read completed with error (sct=0, sc=8)
00:06:30.836 Write completed with error (sct=0, sc=8)
[... identical "Read/Write completed with error (sct=0, sc=8)" completion records repeated from 00:06:30.836 through 00:06:31.772; duplicates omitted ...]
00:06:31.771 [2024-07-26 12:07:24.816415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f1ac0 is same with the state(5) to be set
00:06:31.771 [2024-07-26 12:07:24.860498] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f659c00d660 is same with the state(5) to be set
00:06:31.772 [2024-07-26 12:07:24.860750] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f659c00d000 is same with the state(5) to be set
00:06:31.772 [2024-07-26 12:07:24.861566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f0c20 is same with the state(5) to be set
00:06:31.772 [2024-07-26 12:07:24.863268] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x18f05c0 is same with the state(5) to be set
00:06:31.772 Initializing NVMe Controllers
00:06:31.772 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:31.772 Controller IO queue size 128, less than required.
00:06:31.772 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:31.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:31.772 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:31.772 Initialization complete. Launching workers. 00:06:31.772 ======================================================== 00:06:31.772 Latency(us) 00:06:31.772 Device Information : IOPS MiB/s Average min max 00:06:31.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 185.31 0.09 902883.29 540.83 1010776.90 00:06:31.772 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 150.53 0.07 1088226.65 343.37 2003070.37 00:06:31.772 ======================================================== 00:06:31.772 Total : 335.85 0.16 985958.79 343.37 2003070.37 00:06:31.772 00:06:31.772 [2024-07-26 12:07:24.864122] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x18f1ac0 (9): Bad file descriptor 00:06:31.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:31.772 12:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:31.772 12:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:31.772 12:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2774458 00:06:31.772 12:07:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 2774458 00:06:32.340 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (2774458) - No such process 00:06:32.340 12:07:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 2774458 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@650 -- # local es=0 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # valid_exec_arg wait 2774458 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@638 -- # local arg=wait 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # type -t wait 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # wait 2774458 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@653 -- # es=1 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.340 [2024-07-26 12:07:25.387997] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=2774883 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2774883 00:06:32.340 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:32.340 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.340 [2024-07-26 12:07:25.452638] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:32.907 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:32.907 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2774883 00:06:32.907 12:07:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.165 12:07:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:33.165 12:07:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2774883 00:06:33.165 12:07:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:33.732 12:07:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:33.732 12:07:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2774883 00:06:33.732 12:07:26 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:34.300 12:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:34.300 12:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2774883 00:06:34.301 12:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:34.868 12:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- 
# (( delay++ > 20 )) 00:06:34.868 12:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2774883 00:06:34.868 12:07:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.436 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.436 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2774883 00:06:35.436 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:35.436 Initializing NVMe Controllers 00:06:35.436 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:35.436 Controller IO queue size 128, less than required. 00:06:35.436 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:35.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:35.436 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:35.436 Initialization complete. Launching workers. 
00:06:35.436 ======================================================== 00:06:35.436 Latency(us) 00:06:35.436 Device Information : IOPS MiB/s Average min max 00:06:35.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004134.79 1000285.17 1010991.07 00:06:35.436 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005138.65 1000268.43 1042858.88 00:06:35.436 ======================================================== 00:06:35.436 Total : 256.00 0.12 1004636.72 1000268.43 1042858.88 00:06:35.436 00:06:35.694 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:35.694 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 2774883 00:06:35.694 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (2774883) - No such process 00:06:35.694 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 2774883 00:06:35.694 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:35.694 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:35.694 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:35.694 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:06:35.694 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:35.694 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:06:35.694 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:35.694 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r 
nvme-tcp 00:06:35.694 rmmod nvme_tcp 00:06:35.694 rmmod nvme_fabrics 00:06:35.954 rmmod nvme_keyring 00:06:35.954 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:35.954 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:06:35.954 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:06:35.954 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 2774433 ']' 00:06:35.954 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 2774433 00:06:35.954 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@950 -- # '[' -z 2774433 ']' 00:06:35.954 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # kill -0 2774433 00:06:35.954 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # uname 00:06:35.954 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:35.954 12:07:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2774433 00:06:35.954 12:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:06:35.954 12:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:06:35.954 12:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2774433' 00:06:35.954 killing process with pid 2774433 00:06:35.954 12:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@969 -- # kill 2774433 00:06:35.954 12:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@974 -- # wait 
2774433 00:06:36.214 12:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:36.214 12:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:36.214 12:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:36.214 12:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:36.214 12:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:36.214 12:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.214 12:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.214 12:07:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.124 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:38.124 00:06:38.124 real 0m12.356s 00:06:38.124 user 0m27.853s 00:06:38.124 sys 0m2.980s 00:06:38.124 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:38.124 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:38.124 ************************************ 00:06:38.124 END TEST nvmf_delete_subsystem 00:06:38.124 ************************************ 00:06:38.124 12:07:31 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:38.124 12:07:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:38.124 12:07:31 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1107 -- # xtrace_disable 00:06:38.124 12:07:31 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:38.384 ************************************ 00:06:38.384 START TEST nvmf_host_management 00:06:38.384 ************************************ 00:06:38.384 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:38.384 * Looking for test storage... 00:06:38.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 
00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.385 12:07:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:38.385 12:07:31 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:06:38.385 12:07:31 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:40.287 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:40.288 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:40.288 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:40.288 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: 
cvl_0_1' 00:06:40.288 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # 
ip -4 addr flush cvl_0_0 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:40.288 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:40.288 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:06:40.288 00:06:40.288 --- 10.0.0.2 ping statistics --- 00:06:40.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.288 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:40.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:40.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.170 ms 00:06:40.288 00:06:40.288 --- 10.0.0.1 ping statistics --- 00:06:40.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:40.288 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:40.288 12:07:33 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=2777343 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 2777343 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2777343 ']' 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:40.288 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:40.288 12:07:33 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:40.547 [2024-07-26 12:07:33.580108] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:06:40.547 [2024-07-26 12:07:33.580193] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.547 EAL: No free 2048 kB hugepages reported on node 1 00:06:40.547 [2024-07-26 12:07:33.649295] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:40.547 [2024-07-26 12:07:33.768423] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:40.547 [2024-07-26 12:07:33.768484] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:40.547 [2024-07-26 12:07:33.768501] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:40.547 [2024-07-26 12:07:33.768514] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:40.547 [2024-07-26 12:07:33.768526] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:40.547 [2024-07-26 12:07:33.768633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:40.547 [2024-07-26 12:07:33.768738] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:40.547 [2024-07-26 12:07:33.768798] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:06:40.547 [2024-07-26 12:07:33.768800] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.483 [2024-07-26 12:07:34.541299] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:06:41.483 12:07:34 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.483 Malloc0 00:06:41.483 [2024-07-26 12:07:34.601467] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=2777517 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 2777517 /var/tmp/bdevperf.sock 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@831 -- # '[' -z 2777517 ']' 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:06:41.483 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:41.483 { 00:06:41.483 "params": { 00:06:41.483 "name": "Nvme$subsystem", 00:06:41.483 "trtype": "$TEST_TRANSPORT", 00:06:41.483 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:41.483 "adrfam": "ipv4", 00:06:41.483 "trsvcid": "$NVMF_PORT", 00:06:41.483 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:41.483 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:06:41.483 "hdgst": ${hdgst:-false}, 
00:06:41.483 "ddgst": ${ddgst:-false} 00:06:41.483 }, 00:06:41.483 "method": "bdev_nvme_attach_controller" 00:06:41.483 } 00:06:41.483 EOF 00:06:41.483 )") 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:41.483 12:07:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:41.483 "params": { 00:06:41.483 "name": "Nvme0", 00:06:41.483 "trtype": "tcp", 00:06:41.483 "traddr": "10.0.0.2", 00:06:41.483 "adrfam": "ipv4", 00:06:41.483 "trsvcid": "4420", 00:06:41.483 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:41.483 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:41.483 "hdgst": false, 00:06:41.483 "ddgst": false 00:06:41.483 }, 00:06:41.483 "method": "bdev_nvme_attach_controller" 00:06:41.483 }' 00:06:41.483 [2024-07-26 12:07:34.680645] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:06:41.484 [2024-07-26 12:07:34.680718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777517 ] 00:06:41.484 EAL: No free 2048 kB hugepages reported on node 1 00:06:41.741 [2024-07-26 12:07:34.741478] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:41.741 [2024-07-26 12:07:34.851482] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:42.000 Running I/O for 10 seconds... 
00:06:42.000 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:42.000 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # return 0 00:06:42.000 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:06:42.000 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.000 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=67 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 67 -ge 100 ']' 00:06:42.259 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@62 -- # sleep 0.25 00:06:42.519 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i-- )) 00:06:42.519 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:06:42.519 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:06:42.519 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:06:42.519 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.519 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.520 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.520 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=515 00:06:42.520 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
target/host_management.sh@58 -- # '[' 515 -ge 100 ']' 00:06:42.520 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:06:42.520 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:06:42.520 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:06:42.520 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:42.520 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.520 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.520 [2024-07-26 12:07:35.596930] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.520 [2024-07-26 12:07:35.596991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597011] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.520 [2024-07-26 12:07:35.597025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597039] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.520 [2024-07-26 12:07:35.597052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597075] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 
cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:06:42.520 [2024-07-26 12:07:35.597090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xace790 is same with the state(5) to be set 00:06:42.520 [2024-07-26 12:07:35.597166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:80896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:81024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:81152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:81280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:81408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 
[2024-07-26 12:07:35.597325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:81536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:81664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:81792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:73728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:73856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:73984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:74112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:74240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:74368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:74496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:74624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:74752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:74880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:75008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:75136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:75264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:75392 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:75520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:75648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:75776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:75904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:76032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.520 [2024-07-26 12:07:35.597983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.520 [2024-07-26 12:07:35.597999] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:76160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:76288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:76416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:76544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:76672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:76800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598180] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:76928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:77056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:77184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:77312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:77440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:77568 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:77696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:77824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:77952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:78080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:78208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 
12:07:35.598526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:78336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:78464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:78592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:78720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:78848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:78976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598685] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:79104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:79232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:79360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:79488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:79616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 
nsid:1 lba:79744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.521 [2024-07-26 12:07:35.598880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:79872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.521 [2024-07-26 12:07:35.598894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.522 [2024-07-26 12:07:35.598910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:80000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.522 [2024-07-26 12:07:35.598923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.522 [2024-07-26 12:07:35.598939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:80128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.522 [2024-07-26 12:07:35.598952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.522 [2024-07-26 12:07:35.598967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:80256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.522 [2024-07-26 12:07:35.598981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.522 [2024-07-26 12:07:35.598997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:80384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.522 [2024-07-26 12:07:35.599010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:06:42.522 [2024-07-26 12:07:35.599026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:80512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.522 [2024-07-26 12:07:35.599040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.522 [2024-07-26 12:07:35.599055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:80640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.522 [2024-07-26 12:07:35.599076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.522 [2024-07-26 12:07:35.599092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:80768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:06:42.522 [2024-07-26 12:07:35.599107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.522 [2024-07-26 12:07:35.599189] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xedf5a0 was disconnected and freed. reset controller. 
00:06:42.522 [2024-07-26 12:07:35.600423] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:06:42.522 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.522 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:06:42.522 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@561 -- # xtrace_disable 00:06:42.522 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:42.522 task offset: 80896 on job bdev=Nvme0n1 fails 00:06:42.522 00:06:42.522 Latency(us) 00:06:42.522 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:42.522 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:42.522 Job: Nvme0n1 ended in about 0.40 seconds with error 00:06:42.522 Verification LBA range: start 0x0 length 0x400 00:06:42.522 Nvme0n1 : 0.40 1425.97 89.12 158.44 0.00 39271.56 2645.71 34952.53 00:06:42.522 =================================================================================================================== 00:06:42.522 Total : 1425.97 89.12 158.44 0.00 39271.56 2645.71 34952.53 00:06:42.522 [2024-07-26 12:07:35.602303] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:42.522 [2024-07-26 12:07:35.602332] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xace790 (9): Bad file descriptor 00:06:42.522 [2024-07-26 12:07:35.604433] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:06:42.522 [2024-07-26 12:07:35.604586] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:06:42.522 [2024-07-26 
12:07:35.604615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:06:42.522 [2024-07-26 12:07:35.604642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:06:42.522 [2024-07-26 12:07:35.604659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:06:42.522 [2024-07-26 12:07:35.604673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:06:42.522 [2024-07-26 12:07:35.604685] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0xace790 00:06:42.522 [2024-07-26 12:07:35.604719] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xace790 (9): Bad file descriptor 00:06:42.522 [2024-07-26 12:07:35.604744] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:06:42.522 [2024-07-26 12:07:35.604758] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:06:42.522 [2024-07-26 12:07:35.604775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:06:42.522 [2024-07-26 12:07:35.604795] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:06:42.522 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:06:42.522 12:07:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:06:43.463 12:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 2777517 00:06:43.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (2777517) - No such process 00:06:43.463 12:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:06:43.463 12:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:06:43.463 12:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:06:43.463 12:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:06:43.463 12:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:06:43.463 12:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:06:43.463 12:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:06:43.463 12:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:06:43.463 { 00:06:43.463 "params": { 00:06:43.463 "name": "Nvme$subsystem", 00:06:43.463 "trtype": "$TEST_TRANSPORT", 00:06:43.463 "traddr": "$NVMF_FIRST_TARGET_IP", 00:06:43.463 "adrfam": "ipv4", 00:06:43.463 "trsvcid": "$NVMF_PORT", 00:06:43.463 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:06:43.463 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:06:43.463 "hdgst": ${hdgst:-false}, 00:06:43.463 "ddgst": ${ddgst:-false} 00:06:43.463 }, 00:06:43.463 "method": "bdev_nvme_attach_controller" 00:06:43.463 } 00:06:43.463 EOF 00:06:43.463 )") 00:06:43.463 12:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:06:43.463 12:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:06:43.463 12:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:06:43.463 12:07:36 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:06:43.463 "params": { 00:06:43.463 "name": "Nvme0", 00:06:43.463 "trtype": "tcp", 00:06:43.463 "traddr": "10.0.0.2", 00:06:43.463 "adrfam": "ipv4", 00:06:43.463 "trsvcid": "4420", 00:06:43.463 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:06:43.463 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:06:43.463 "hdgst": false, 00:06:43.463 "ddgst": false 00:06:43.463 }, 00:06:43.463 "method": "bdev_nvme_attach_controller" 00:06:43.463 }' 00:06:43.463 [2024-07-26 12:07:36.658453] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:06:43.463 [2024-07-26 12:07:36.658526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2777685 ] 00:06:43.463 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.722 [2024-07-26 12:07:36.718020] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:43.722 [2024-07-26 12:07:36.829336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:43.979 Running I/O for 1 seconds... 
00:06:44.918 00:06:44.918 Latency(us) 00:06:44.918 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:44.918 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:06:44.918 Verification LBA range: start 0x0 length 0x400 00:06:44.918 Nvme0n1 : 1.03 1560.61 97.54 0.00 0.00 40367.79 8980.86 35340.89 00:06:44.918 =================================================================================================================== 00:06:44.918 Total : 1560.61 97.54 0.00 0.00 40367.79 8980.86 35340.89 00:06:45.180 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:06:45.180 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:45.470 rmmod nvme_tcp 
00:06:45.470 rmmod nvme_fabrics 00:06:45.470 rmmod nvme_keyring 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 2777343 ']' 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 2777343 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@950 -- # '[' -z 2777343 ']' 00:06:45.470 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # kill -0 2777343 00:06:45.471 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # uname 00:06:45.471 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:06:45.471 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2777343 00:06:45.471 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:06:45.471 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:06:45.471 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2777343' 00:06:45.471 killing process with pid 2777343 00:06:45.471 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@969 -- # kill 2777343 00:06:45.471 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@974 -- # wait 2777343 00:06:45.732 [2024-07-26 12:07:38.790251] 
app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:06:45.732 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:45.732 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:45.732 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:45.732 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:45.732 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:45.732 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.732 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.732 12:07:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.638 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:47.638 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:06:47.638 00:06:47.638 real 0m9.487s 00:06:47.638 user 0m23.530s 00:06:47.638 sys 0m2.536s 00:06:47.638 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1126 -- # xtrace_disable 00:06:47.638 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:06:47.638 ************************************ 00:06:47.638 END TEST nvmf_host_management 00:06:47.638 ************************************ 00:06:47.638 12:07:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh 
--transport=tcp 00:06:47.638 12:07:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:06:47.638 12:07:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:06:47.638 12:07:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:47.897 ************************************ 00:06:47.897 START TEST nvmf_lvol 00:06:47.897 ************************************ 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:06:47.897 * Looking for test storage... 00:06:47.897 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.897 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 
00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:06:47.898 12:07:40 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:06:49.802 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:06:49.802 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.802 12:07:43 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:06:49.802 Found net devices under 0000:0a:00.0: cvl_0_0 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:06:49.802 Found net devices under 0000:0a:00.1: cvl_0_1 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:49.802 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:49.803 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:49.803 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:49.803 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:49.803 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:49.803 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:49.803 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:49.803 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:49.803 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:49.803 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:50.061 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:50.061 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:06:50.061 00:06:50.061 --- 10.0.0.2 ping statistics --- 00:06:50.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.061 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:50.061 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:50.061 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.163 ms 00:06:50.061 00:06:50.061 --- 10.0.0.1 ping statistics --- 00:06:50.061 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:50.061 rtt min/avg/max/mdev = 0.163/0.163/0.163/0.000 ms 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # 
NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@724 -- # xtrace_disable 00:06:50.061 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.062 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=2779891 00:06:50.062 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:06:50.062 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 2779891 00:06:50.062 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@831 -- # '[' -z 2779891 ']' 00:06:50.062 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:50.062 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # local max_retries=100 00:06:50.062 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:50.062 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:06:50.062 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # xtrace_disable 00:06:50.062 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.062 [2024-07-26 12:07:43.233171] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:06:50.062 [2024-07-26 12:07:43.233239] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:50.062 EAL: No free 2048 kB hugepages reported on node 1 00:06:50.062 [2024-07-26 12:07:43.295624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:50.319 [2024-07-26 12:07:43.404626] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:50.319 [2024-07-26 12:07:43.404685] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:50.319 [2024-07-26 12:07:43.404712] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:50.319 [2024-07-26 12:07:43.404724] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:50.319 [2024-07-26 12:07:43.404733] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:06:50.319 [2024-07-26 12:07:43.404835] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:50.319 [2024-07-26 12:07:43.404897] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:50.319 [2024-07-26 12:07:43.404899] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:50.319 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:06:50.319 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # return 0 00:06:50.319 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:50.319 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@730 -- # xtrace_disable 00:06:50.319 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:06:50.319 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:50.319 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:50.577 [2024-07-26 12:07:43.776291] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:50.577 12:07:43 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:50.834 12:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:06:50.834 12:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:06:51.093 12:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:06:51.093 12:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:06:51.351 12:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:06:51.918 12:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=b1d0d49e-c230-403e-9cfe-adecf41dd42d 00:06:51.918 12:07:44 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u b1d0d49e-c230-403e-9cfe-adecf41dd42d lvol 20 00:06:51.918 12:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=3d6115d4-d15b-477f-be95-b6b2a1477f5e 00:06:51.918 12:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:52.175 12:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 3d6115d4-d15b-477f-be95-b6b2a1477f5e 00:06:52.433 12:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:52.691 [2024-07-26 12:07:45.825199] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:52.691 12:07:45 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:52.950 12:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=2780318 00:06:52.950 12:07:46 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:06:52.950 12:07:46 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:06:52.950 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.885 12:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 3d6115d4-d15b-477f-be95-b6b2a1477f5e MY_SNAPSHOT 00:06:54.142 12:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=efe508b4-6bd3-4607-9b05-28c5c7ea3e3b 00:06:54.142 12:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 3d6115d4-d15b-477f-be95-b6b2a1477f5e 30 00:06:54.710 12:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone efe508b4-6bd3-4607-9b05-28c5c7ea3e3b MY_CLONE 00:06:54.710 12:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=ce354e1f-8c7d-4a4c-9e15-9c2c5e29ee45 00:06:54.710 12:07:47 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate ce354e1f-8c7d-4a4c-9e15-9c2c5e29ee45 00:06:55.648 12:07:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 2780318 00:07:03.769 Initializing NVMe Controllers 00:07:03.769 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:07:03.769 Controller IO queue size 128, less than required. 00:07:03.769 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:03.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:07:03.769 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:07:03.769 Initialization complete. Launching workers. 00:07:03.769 ======================================================== 00:07:03.769 Latency(us) 00:07:03.769 Device Information : IOPS MiB/s Average min max 00:07:03.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 10741.50 41.96 11924.34 461.89 123115.32 00:07:03.769 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 10692.00 41.77 11982.21 2478.16 62857.09 00:07:03.769 ======================================================== 00:07:03.769 Total : 21433.50 83.72 11953.21 461.89 123115.32 00:07:03.769 00:07:03.769 12:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:03.769 12:07:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 3d6115d4-d15b-477f-be95-b6b2a1477f5e 00:07:04.027 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u b1d0d49e-c230-403e-9cfe-adecf41dd42d 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:04.287 rmmod nvme_tcp 00:07:04.287 rmmod nvme_fabrics 00:07:04.287 rmmod nvme_keyring 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 2779891 ']' 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 2779891 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@950 -- # '[' -z 2779891 ']' 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # kill -0 2779891 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # uname 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2779891 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2779891' 00:07:04.287 killing process with pid 2779891 00:07:04.287 12:07:57 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@969 -- # kill 2779891 00:07:04.287 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@974 -- # wait 2779891 00:07:04.546 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:04.546 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:04.546 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:04.546 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:04.546 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:04.546 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:04.546 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:04.546 12:07:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:07.086 00:07:07.086 real 0m18.885s 00:07:07.086 user 1m3.745s 00:07:07.086 sys 0m5.815s 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:07.086 ************************************ 00:07:07.086 END TEST nvmf_lvol 00:07:07.086 ************************************ 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:07.086 12:07:59 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.086 ************************************ 00:07:07.086 START TEST nvmf_lvs_grow 00:07:07.086 ************************************ 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:07.086 * Looking for test storage... 00:07:07.086 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.086 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- paths/export.sh@5 -- # export PATH 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:07:07.087 12:07:59 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:09.034 12:08:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:09.034 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:09.035 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.035 
12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:09.035 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.035 12:08:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:09.035 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:09.035 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:09.035 12:08:01 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:09.035 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:09.035 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.267 ms 00:07:09.035 00:07:09.035 --- 10.0.0.2 ping statistics --- 00:07:09.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.035 rtt min/avg/max/mdev = 0.267/0.267/0.267/0.000 ms 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:09.035 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:09.035 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:07:09.035 00:07:09.035 --- 10.0.0.1 ping statistics --- 00:07:09.035 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:09.035 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=2783583 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:09.035 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 2783583 00:07:09.036 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@831 -- # '[' -z 2783583 ']' 00:07:09.036 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:09.036 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:09.036 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:09.036 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:09.036 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:09.036 12:08:01 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.036 [2024-07-26 12:08:01.977004] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:07:09.036 [2024-07-26 12:08:01.977118] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:09.036 EAL: No free 2048 kB hugepages reported on node 1 00:07:09.036 [2024-07-26 12:08:02.046624] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:09.036 [2024-07-26 12:08:02.165028] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:09.036 [2024-07-26 12:08:02.165091] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:07:09.036 [2024-07-26 12:08:02.165108] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:09.036 [2024-07-26 12:08:02.165122] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:09.036 [2024-07-26 12:08:02.165134] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:09.036 [2024-07-26 12:08:02.165164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:09.972 12:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:09.972 12:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # return 0 00:07:09.972 12:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:09.972 12:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:09.972 12:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:09.972 12:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:09.972 12:08:02 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:09.972 [2024-07-26 12:08:03.167801] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
common/autotest_common.sh@10 -- # set +x 00:07:09.972 ************************************ 00:07:09.972 START TEST lvs_grow_clean 00:07:09.972 ************************************ 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1125 -- # lvs_grow 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:09.972 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:10.542 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:10.542 12:08:03 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:10.542 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4 00:07:10.542 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4 00:07:10.542 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:10.800 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:10.800 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:10.800 12:08:03 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4 lvol 150 00:07:11.060 12:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=38436189-71f1-445e-b695-5da068f10207 00:07:11.060 12:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:11.060 12:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:11.318 [2024-07-26 12:08:04.485280] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:11.318 [2024-07-26 12:08:04.485391] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:11.318 true 00:07:11.318 12:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4 00:07:11.318 12:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:11.577 12:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:11.577 12:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:11.836 12:08:04 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 38436189-71f1-445e-b695-5da068f10207 00:07:12.096 12:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:12.355 [2024-07-26 12:08:05.464324] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:12.355 12:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:12.614 12:08:05 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2784036 00:07:12.614 12:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:12.614 12:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2784036 /var/tmp/bdevperf.sock 00:07:12.614 12:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@831 -- # '[' -z 2784036 ']' 00:07:12.614 12:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:12.614 12:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:12.614 12:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:12.614 12:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:12.614 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:12.614 12:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:12.614 12:08:05 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:12.614 [2024-07-26 12:08:05.771776] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:07:12.614 [2024-07-26 12:08:05.771850] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2784036 ] 00:07:12.614 EAL: No free 2048 kB hugepages reported on node 1 00:07:12.614 [2024-07-26 12:08:05.833266] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:12.872 [2024-07-26 12:08:05.950964] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:13.810 12:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:13.810 12:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # return 0 00:07:13.810 12:08:06 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:14.068 Nvme0n1 00:07:14.068 12:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:14.327 [ 00:07:14.327 { 00:07:14.327 "name": "Nvme0n1", 00:07:14.327 "aliases": [ 00:07:14.327 "38436189-71f1-445e-b695-5da068f10207" 00:07:14.327 ], 00:07:14.327 "product_name": "NVMe disk", 00:07:14.327 "block_size": 4096, 00:07:14.327 "num_blocks": 38912, 00:07:14.327 "uuid": "38436189-71f1-445e-b695-5da068f10207", 00:07:14.327 "assigned_rate_limits": { 00:07:14.327 "rw_ios_per_sec": 0, 00:07:14.327 "rw_mbytes_per_sec": 0, 00:07:14.327 "r_mbytes_per_sec": 0, 00:07:14.327 "w_mbytes_per_sec": 0 00:07:14.327 }, 00:07:14.327 "claimed": false, 00:07:14.327 "zoned": false, 00:07:14.327 
"supported_io_types": { 00:07:14.327 "read": true, 00:07:14.327 "write": true, 00:07:14.327 "unmap": true, 00:07:14.327 "flush": true, 00:07:14.327 "reset": true, 00:07:14.327 "nvme_admin": true, 00:07:14.327 "nvme_io": true, 00:07:14.327 "nvme_io_md": false, 00:07:14.327 "write_zeroes": true, 00:07:14.327 "zcopy": false, 00:07:14.328 "get_zone_info": false, 00:07:14.328 "zone_management": false, 00:07:14.328 "zone_append": false, 00:07:14.328 "compare": true, 00:07:14.328 "compare_and_write": true, 00:07:14.328 "abort": true, 00:07:14.328 "seek_hole": false, 00:07:14.328 "seek_data": false, 00:07:14.328 "copy": true, 00:07:14.328 "nvme_iov_md": false 00:07:14.328 }, 00:07:14.328 "memory_domains": [ 00:07:14.328 { 00:07:14.328 "dma_device_id": "system", 00:07:14.328 "dma_device_type": 1 00:07:14.328 } 00:07:14.328 ], 00:07:14.328 "driver_specific": { 00:07:14.328 "nvme": [ 00:07:14.328 { 00:07:14.328 "trid": { 00:07:14.328 "trtype": "TCP", 00:07:14.328 "adrfam": "IPv4", 00:07:14.328 "traddr": "10.0.0.2", 00:07:14.328 "trsvcid": "4420", 00:07:14.328 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:14.328 }, 00:07:14.328 "ctrlr_data": { 00:07:14.328 "cntlid": 1, 00:07:14.328 "vendor_id": "0x8086", 00:07:14.328 "model_number": "SPDK bdev Controller", 00:07:14.328 "serial_number": "SPDK0", 00:07:14.328 "firmware_revision": "24.09", 00:07:14.328 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:14.328 "oacs": { 00:07:14.328 "security": 0, 00:07:14.328 "format": 0, 00:07:14.328 "firmware": 0, 00:07:14.328 "ns_manage": 0 00:07:14.328 }, 00:07:14.328 "multi_ctrlr": true, 00:07:14.328 "ana_reporting": false 00:07:14.328 }, 00:07:14.328 "vs": { 00:07:14.328 "nvme_version": "1.3" 00:07:14.328 }, 00:07:14.328 "ns_data": { 00:07:14.328 "id": 1, 00:07:14.328 "can_share": true 00:07:14.328 } 00:07:14.328 } 00:07:14.328 ], 00:07:14.328 "mp_policy": "active_passive" 00:07:14.328 } 00:07:14.328 } 00:07:14.328 ] 00:07:14.328 12:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2784296 00:07:14.328 12:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:14.328 12:08:07 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:14.328 Running I/O for 10 seconds... 00:07:15.286 Latency(us) 00:07:15.286 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:15.286 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:15.286 Nvme0n1 : 1.00 14561.00 56.88 0.00 0.00 0.00 0.00 0.00 00:07:15.286 =================================================================================================================== 00:07:15.286 Total : 14561.00 56.88 0.00 0.00 0.00 0.00 0.00 00:07:15.286 00:07:16.225 12:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4 00:07:16.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:16.484 Nvme0n1 : 2.00 14751.50 57.62 0.00 0.00 0.00 0.00 0.00 00:07:16.484 =================================================================================================================== 00:07:16.484 Total : 14751.50 57.62 0.00 0.00 0.00 0.00 0.00 00:07:16.484 00:07:16.484 true 00:07:16.484 12:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4 00:07:16.484 12:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:16.744 12:08:09 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:16.744 12:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:16.744 12:08:09 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 2784296 00:07:17.312 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:17.312 Nvme0n1 : 3.00 14790.33 57.77 0.00 0.00 0.00 0.00 0.00 00:07:17.312 =================================================================================================================== 00:07:17.312 Total : 14790.33 57.77 0.00 0.00 0.00 0.00 0.00 00:07:17.312 00:07:18.691 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:18.691 Nvme0n1 : 4.00 14826.00 57.91 0.00 0.00 0.00 0.00 0.00 00:07:18.691 =================================================================================================================== 00:07:18.691 Total : 14826.00 57.91 0.00 0.00 0.00 0.00 0.00 00:07:18.691 00:07:19.629 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:19.629 Nvme0n1 : 5.00 14872.80 58.10 0.00 0.00 0.00 0.00 0.00 00:07:19.629 =================================================================================================================== 00:07:19.629 Total : 14872.80 58.10 0.00 0.00 0.00 0.00 0.00 00:07:19.629 00:07:20.582 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:20.582 Nvme0n1 : 6.00 14940.33 58.36 0.00 0.00 0.00 0.00 0.00 00:07:20.582 =================================================================================================================== 00:07:20.582 Total : 14940.33 58.36 0.00 0.00 0.00 0.00 0.00 00:07:20.582 00:07:21.518 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:21.518 Nvme0n1 : 7.00 14990.86 58.56 0.00 0.00 0.00 0.00 0.00 00:07:21.518 
=================================================================================================================== 00:07:21.518 Total : 14990.86 58.56 0.00 0.00 0.00 0.00 0.00 00:07:21.518 00:07:22.458 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:22.458 Nvme0n1 : 8.00 15023.88 58.69 0.00 0.00 0.00 0.00 0.00 00:07:22.458 =================================================================================================================== 00:07:22.458 Total : 15023.88 58.69 0.00 0.00 0.00 0.00 0.00 00:07:22.458 00:07:23.395 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:23.395 Nvme0n1 : 9.00 15062.67 58.84 0.00 0.00 0.00 0.00 0.00 00:07:23.395 =================================================================================================================== 00:07:23.395 Total : 15062.67 58.84 0.00 0.00 0.00 0.00 0.00 00:07:23.395 00:07:24.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.360 Nvme0n1 : 10.00 15094.60 58.96 0.00 0.00 0.00 0.00 0.00 00:07:24.360 =================================================================================================================== 00:07:24.360 Total : 15094.60 58.96 0.00 0.00 0.00 0.00 0.00 00:07:24.360 00:07:24.360 00:07:24.360 Latency(us) 00:07:24.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.360 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:24.360 Nvme0n1 : 10.01 15099.53 58.98 0.00 0.00 8472.09 4927.34 17282.09 00:07:24.360 =================================================================================================================== 00:07:24.360 Total : 15099.53 58.98 0.00 0.00 8472.09 4927.34 17282.09 00:07:24.360 0 00:07:24.360 12:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2784036 00:07:24.360 12:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@950 -- # '[' -z 2784036 ']' 00:07:24.360 12:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # kill -0 2784036 00:07:24.360 12:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # uname 00:07:24.360 12:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:24.360 12:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2784036 00:07:24.360 12:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:24.360 12:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:24.360 12:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2784036' 00:07:24.360 killing process with pid 2784036 00:07:24.360 12:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@969 -- # kill 2784036 00:07:24.360 Received shutdown signal, test time was about 10.000000 seconds 00:07:24.360 00:07:24.360 Latency(us) 00:07:24.360 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:24.360 =================================================================================================================== 00:07:24.360 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:24.360 12:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@974 -- # wait 2784036 00:07:24.619 12:08:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:24.877 12:08:18 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:25.135 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4 00:07:25.135 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:25.394 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:25.394 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:25.394 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:25.962 [2024-07-26 12:08:18.912013] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:25.963 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4 00:07:25.963 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # local es=0 00:07:25.963 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4 00:07:25.963 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@638 -- # local 
arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.963 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.963 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.963 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.963 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.963 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:25.963 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:25.963 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:25.963 12:08:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4 00:07:25.963 request: 00:07:25.963 { 00:07:25.963 "uuid": "3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4", 00:07:25.963 "method": "bdev_lvol_get_lvstores", 00:07:25.963 "req_id": 1 00:07:25.963 } 00:07:25.963 Got JSON-RPC error response 00:07:25.963 response: 00:07:25.963 { 00:07:25.963 "code": -19, 00:07:25.963 "message": "No such device" 00:07:25.963 } 00:07:25.963 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@653 -- # es=1 00:07:25.963 12:08:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:25.963 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:25.963 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:25.963 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:26.223 aio_bdev 00:07:26.223 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 38436189-71f1-445e-b695-5da068f10207 00:07:26.223 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local bdev_name=38436189-71f1-445e-b695-5da068f10207 00:07:26.223 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:26.223 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@901 -- # local i 00:07:26.223 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:26.223 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:26.223 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:26.481 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 38436189-71f1-445e-b695-5da068f10207 -t 2000 00:07:26.741 [ 00:07:26.741 { 
00:07:26.741 "name": "38436189-71f1-445e-b695-5da068f10207", 00:07:26.741 "aliases": [ 00:07:26.741 "lvs/lvol" 00:07:26.741 ], 00:07:26.741 "product_name": "Logical Volume", 00:07:26.741 "block_size": 4096, 00:07:26.741 "num_blocks": 38912, 00:07:26.741 "uuid": "38436189-71f1-445e-b695-5da068f10207", 00:07:26.741 "assigned_rate_limits": { 00:07:26.741 "rw_ios_per_sec": 0, 00:07:26.741 "rw_mbytes_per_sec": 0, 00:07:26.741 "r_mbytes_per_sec": 0, 00:07:26.741 "w_mbytes_per_sec": 0 00:07:26.741 }, 00:07:26.741 "claimed": false, 00:07:26.741 "zoned": false, 00:07:26.741 "supported_io_types": { 00:07:26.741 "read": true, 00:07:26.741 "write": true, 00:07:26.741 "unmap": true, 00:07:26.741 "flush": false, 00:07:26.741 "reset": true, 00:07:26.741 "nvme_admin": false, 00:07:26.741 "nvme_io": false, 00:07:26.741 "nvme_io_md": false, 00:07:26.741 "write_zeroes": true, 00:07:26.741 "zcopy": false, 00:07:26.741 "get_zone_info": false, 00:07:26.741 "zone_management": false, 00:07:26.741 "zone_append": false, 00:07:26.741 "compare": false, 00:07:26.741 "compare_and_write": false, 00:07:26.741 "abort": false, 00:07:26.741 "seek_hole": true, 00:07:26.741 "seek_data": true, 00:07:26.741 "copy": false, 00:07:26.741 "nvme_iov_md": false 00:07:26.741 }, 00:07:26.741 "driver_specific": { 00:07:26.741 "lvol": { 00:07:26.741 "lvol_store_uuid": "3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4", 00:07:26.741 "base_bdev": "aio_bdev", 00:07:26.741 "thin_provision": false, 00:07:26.741 "num_allocated_clusters": 38, 00:07:26.741 "snapshot": false, 00:07:26.741 "clone": false, 00:07:26.741 "esnap_clone": false 00:07:26.741 } 00:07:26.741 } 00:07:26.741 } 00:07:26.741 ] 00:07:26.741 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@907 -- # return 0 00:07:26.741 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4 00:07:26.741 12:08:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:27.001 12:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:27.001 12:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4 00:07:27.002 12:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:27.260 12:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:27.260 12:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 38436189-71f1-445e-b695-5da068f10207 00:07:27.519 12:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3f0bbb15-bdb5-4ff8-86c8-a31b0d2383c4 00:07:27.779 12:08:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:28.038 00:07:28.038 real 0m17.989s 00:07:28.038 user 0m17.493s 00:07:28.038 sys 0m1.987s 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:28.038 12:08:21 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:28.038 ************************************ 00:07:28.038 END TEST lvs_grow_clean 00:07:28.038 ************************************ 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:28.038 ************************************ 00:07:28.038 START TEST lvs_grow_dirty 00:07:28.038 ************************************ 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1125 -- # lvs_grow dirty 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:28.038 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:28.606 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:28.606 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:28.606 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:28.606 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:28.606 12:08:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:28.863 12:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:28.863 12:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:28.863 12:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 
95adcddf-a24a-47b0-8ee7-6ff5eca52e0b lvol 150 00:07:29.121 12:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62 00:07:29.121 12:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:29.121 12:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:29.380 [2024-07-26 12:08:22.580320] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:29.380 [2024-07-26 12:08:22.580456] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:29.380 true 00:07:29.380 12:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:29.380 12:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:29.638 12:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:29.638 12:08:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:29.896 12:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62 00:07:30.167 12:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:30.432 [2024-07-26 12:08:23.555317] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:30.432 12:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:30.690 12:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=2786228 00:07:30.690 12:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:30.690 12:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:30.690 12:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 2786228 /var/tmp/bdevperf.sock 00:07:30.690 12:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2786228 ']' 00:07:30.690 12:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:30.690 12:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:30.690 12:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen 
on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:30.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:30.690 12:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:30.691 12:08:23 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:30.691 [2024-07-26 12:08:23.860391] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:07:30.691 [2024-07-26 12:08:23.860486] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2786228 ] 00:07:30.691 EAL: No free 2048 kB hugepages reported on node 1 00:07:30.691 [2024-07-26 12:08:23.921289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:30.948 [2024-07-26 12:08:24.039302] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:30.948 12:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:30.948 12:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:30.948 12:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:31.516 Nvme0n1 00:07:31.516 12:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:31.774 [ 00:07:31.774 { 00:07:31.774 "name": "Nvme0n1", 00:07:31.774 "aliases": [ 
00:07:31.774 "22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62" 00:07:31.774 ], 00:07:31.774 "product_name": "NVMe disk", 00:07:31.774 "block_size": 4096, 00:07:31.774 "num_blocks": 38912, 00:07:31.774 "uuid": "22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62", 00:07:31.774 "assigned_rate_limits": { 00:07:31.774 "rw_ios_per_sec": 0, 00:07:31.774 "rw_mbytes_per_sec": 0, 00:07:31.774 "r_mbytes_per_sec": 0, 00:07:31.774 "w_mbytes_per_sec": 0 00:07:31.774 }, 00:07:31.774 "claimed": false, 00:07:31.774 "zoned": false, 00:07:31.774 "supported_io_types": { 00:07:31.774 "read": true, 00:07:31.774 "write": true, 00:07:31.774 "unmap": true, 00:07:31.774 "flush": true, 00:07:31.774 "reset": true, 00:07:31.774 "nvme_admin": true, 00:07:31.774 "nvme_io": true, 00:07:31.774 "nvme_io_md": false, 00:07:31.774 "write_zeroes": true, 00:07:31.774 "zcopy": false, 00:07:31.774 "get_zone_info": false, 00:07:31.774 "zone_management": false, 00:07:31.774 "zone_append": false, 00:07:31.774 "compare": true, 00:07:31.774 "compare_and_write": true, 00:07:31.774 "abort": true, 00:07:31.774 "seek_hole": false, 00:07:31.774 "seek_data": false, 00:07:31.774 "copy": true, 00:07:31.774 "nvme_iov_md": false 00:07:31.774 }, 00:07:31.774 "memory_domains": [ 00:07:31.774 { 00:07:31.774 "dma_device_id": "system", 00:07:31.774 "dma_device_type": 1 00:07:31.774 } 00:07:31.774 ], 00:07:31.774 "driver_specific": { 00:07:31.774 "nvme": [ 00:07:31.774 { 00:07:31.774 "trid": { 00:07:31.774 "trtype": "TCP", 00:07:31.774 "adrfam": "IPv4", 00:07:31.774 "traddr": "10.0.0.2", 00:07:31.774 "trsvcid": "4420", 00:07:31.774 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:31.774 }, 00:07:31.774 "ctrlr_data": { 00:07:31.774 "cntlid": 1, 00:07:31.774 "vendor_id": "0x8086", 00:07:31.774 "model_number": "SPDK bdev Controller", 00:07:31.774 "serial_number": "SPDK0", 00:07:31.774 "firmware_revision": "24.09", 00:07:31.774 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:31.774 "oacs": { 00:07:31.774 "security": 0, 00:07:31.774 "format": 0, 00:07:31.774 
"firmware": 0, 00:07:31.774 "ns_manage": 0 00:07:31.774 }, 00:07:31.774 "multi_ctrlr": true, 00:07:31.774 "ana_reporting": false 00:07:31.774 }, 00:07:31.774 "vs": { 00:07:31.774 "nvme_version": "1.3" 00:07:31.774 }, 00:07:31.774 "ns_data": { 00:07:31.774 "id": 1, 00:07:31.774 "can_share": true 00:07:31.774 } 00:07:31.774 } 00:07:31.774 ], 00:07:31.774 "mp_policy": "active_passive" 00:07:31.774 } 00:07:31.774 } 00:07:31.774 ] 00:07:31.774 12:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=2786364 00:07:31.774 12:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:31.774 12:08:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:31.774 Running I/O for 10 seconds... 00:07:33.153 Latency(us) 00:07:33.153 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:33.153 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.153 Nvme0n1 : 1.00 14880.00 58.12 0.00 0.00 0.00 0.00 0.00 00:07:33.153 =================================================================================================================== 00:07:33.153 Total : 14880.00 58.12 0.00 0.00 0.00 0.00 0.00 00:07:33.153 00:07:33.719 12:08:26 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:33.977 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:33.977 Nvme0n1 : 2.00 14976.00 58.50 0.00 0.00 0.00 0.00 0.00 00:07:33.977 =================================================================================================================== 00:07:33.977 Total : 14976.00 58.50 
0.00 0.00 0.00 0.00 0.00 00:07:33.977 00:07:33.977 true 00:07:33.977 12:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:33.977 12:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:07:34.236 12:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:34.236 12:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:34.236 12:08:27 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 2786364 00:07:34.805 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:34.805 Nvme0n1 : 3.00 14996.00 58.58 0.00 0.00 0.00 0.00 0.00 00:07:34.805 =================================================================================================================== 00:07:34.805 Total : 14996.00 58.58 0.00 0.00 0.00 0.00 0.00 00:07:34.805 00:07:36.185 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:36.185 Nvme0n1 : 4.00 15090.25 58.95 0.00 0.00 0.00 0.00 0.00 00:07:36.185 =================================================================================================================== 00:07:36.185 Total : 15090.25 58.95 0.00 0.00 0.00 0.00 0.00 00:07:36.185 00:07:37.123 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:37.123 Nvme0n1 : 5.00 15151.20 59.18 0.00 0.00 0.00 0.00 0.00 00:07:37.123 =================================================================================================================== 00:07:37.123 Total : 15151.20 59.18 0.00 0.00 0.00 0.00 0.00 00:07:37.123 00:07:38.062 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 
4096) 00:07:38.062 Nvme0n1 : 6.00 15193.50 59.35 0.00 0.00 0.00 0.00 0.00 00:07:38.062 =================================================================================================================== 00:07:38.062 Total : 15193.50 59.35 0.00 0.00 0.00 0.00 0.00 00:07:38.062 00:07:39.004 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.004 Nvme0n1 : 7.00 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:07:39.004 =================================================================================================================== 00:07:39.004 Total : 15240.00 59.53 0.00 0.00 0.00 0.00 0.00 00:07:39.004 00:07:39.983 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:39.983 Nvme0n1 : 8.00 15284.38 59.70 0.00 0.00 0.00 0.00 0.00 00:07:39.983 =================================================================================================================== 00:07:39.983 Total : 15284.38 59.70 0.00 0.00 0.00 0.00 0.00 00:07:39.983 00:07:40.919 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.919 Nvme0n1 : 9.00 15309.00 59.80 0.00 0.00 0.00 0.00 0.00 00:07:40.919 =================================================================================================================== 00:07:40.919 Total : 15309.00 59.80 0.00 0.00 0.00 0.00 0.00 00:07:40.919 00:07:41.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.858 Nvme0n1 : 10.00 15329.30 59.88 0.00 0.00 0.00 0.00 0.00 00:07:41.858 =================================================================================================================== 00:07:41.858 Total : 15329.30 59.88 0.00 0.00 0.00 0.00 0.00 00:07:41.858 00:07:41.858 00:07:41.858 Latency(us) 00:07:41.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.858 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.858 Nvme0n1 : 10.01 15330.14 59.88 0.00 0.00 8344.79 
4538.97 16796.63 00:07:41.858 =================================================================================================================== 00:07:41.858 Total : 15330.14 59.88 0.00 0.00 8344.79 4538.97 16796.63 00:07:41.858 0 00:07:41.858 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 2786228 00:07:41.858 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@950 -- # '[' -z 2786228 ']' 00:07:41.858 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # kill -0 2786228 00:07:41.858 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # uname 00:07:41.858 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:41.858 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2786228 00:07:41.858 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:07:41.858 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:07:41.858 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2786228' 00:07:41.858 killing process with pid 2786228 00:07:41.858 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@969 -- # kill 2786228 00:07:41.858 Received shutdown signal, test time was about 10.000000 seconds 00:07:41.858 00:07:41.858 Latency(us) 00:07:41.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:41.858 =================================================================================================================== 00:07:41.858 Total : 0.00 
0.00 0.00 0.00 0.00 0.00 0.00 00:07:41.858 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@974 -- # wait 2786228 00:07:42.117 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:42.683 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:42.941 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:42.941 12:08:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 2783583 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 2783583 00:07:43.199 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 2783583 Killed "${NVMF_APP[@]}" "$@" 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty 
-- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=2787699 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 2787699 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@831 -- # '[' -z 2787699 ']' 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:43.199 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:43.199 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.199 [2024-07-26 12:08:36.296310] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:07:43.199 [2024-07-26 12:08:36.296428] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:43.199 EAL: No free 2048 kB hugepages reported on node 1 00:07:43.199 [2024-07-26 12:08:36.361106] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:43.457 [2024-07-26 12:08:36.466223] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:43.457 [2024-07-26 12:08:36.466275] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:43.457 [2024-07-26 12:08:36.466305] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:43.457 [2024-07-26 12:08:36.466316] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:43.457 [2024-07-26 12:08:36.466326] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:07:43.457 [2024-07-26 12:08:36.466373] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:43.457 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:43.457 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # return 0 00:07:43.457 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:43.457 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:43.457 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:43.457 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:43.457 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:43.717 [2024-07-26 12:08:36.878021] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:07:43.717 [2024-07-26 12:08:36.878183] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:07:43.717 [2024-07-26 12:08:36.878235] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:07:43.717 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:07:43.717 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62 00:07:43.717 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62 
00:07:43.717 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:43.717 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:43.717 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:43.717 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:43.717 12:08:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:43.977 12:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62 -t 2000 00:07:44.235 [ 00:07:44.235 { 00:07:44.235 "name": "22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62", 00:07:44.235 "aliases": [ 00:07:44.235 "lvs/lvol" 00:07:44.235 ], 00:07:44.235 "product_name": "Logical Volume", 00:07:44.235 "block_size": 4096, 00:07:44.235 "num_blocks": 38912, 00:07:44.235 "uuid": "22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62", 00:07:44.235 "assigned_rate_limits": { 00:07:44.235 "rw_ios_per_sec": 0, 00:07:44.235 "rw_mbytes_per_sec": 0, 00:07:44.235 "r_mbytes_per_sec": 0, 00:07:44.235 "w_mbytes_per_sec": 0 00:07:44.235 }, 00:07:44.235 "claimed": false, 00:07:44.235 "zoned": false, 00:07:44.235 "supported_io_types": { 00:07:44.235 "read": true, 00:07:44.235 "write": true, 00:07:44.235 "unmap": true, 00:07:44.235 "flush": false, 00:07:44.235 "reset": true, 00:07:44.235 "nvme_admin": false, 00:07:44.235 "nvme_io": false, 00:07:44.235 "nvme_io_md": false, 00:07:44.235 "write_zeroes": true, 00:07:44.235 "zcopy": false, 00:07:44.235 "get_zone_info": false, 00:07:44.235 "zone_management": false, 00:07:44.235 "zone_append": 
false, 00:07:44.235 "compare": false, 00:07:44.235 "compare_and_write": false, 00:07:44.235 "abort": false, 00:07:44.235 "seek_hole": true, 00:07:44.235 "seek_data": true, 00:07:44.235 "copy": false, 00:07:44.235 "nvme_iov_md": false 00:07:44.235 }, 00:07:44.235 "driver_specific": { 00:07:44.235 "lvol": { 00:07:44.235 "lvol_store_uuid": "95adcddf-a24a-47b0-8ee7-6ff5eca52e0b", 00:07:44.235 "base_bdev": "aio_bdev", 00:07:44.235 "thin_provision": false, 00:07:44.235 "num_allocated_clusters": 38, 00:07:44.235 "snapshot": false, 00:07:44.235 "clone": false, 00:07:44.235 "esnap_clone": false 00:07:44.235 } 00:07:44.235 } 00:07:44.235 } 00:07:44.235 ] 00:07:44.235 12:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:44.235 12:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:44.235 12:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:07:44.494 12:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:07:44.494 12:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:44.494 12:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:07:44.752 12:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:07:44.752 12:08:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:07:45.011 [2024-07-26 12:08:38.146964] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:45.012 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:45.012 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # local es=0 00:07:45.012 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:45.012 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.012 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.012 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.012 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.012 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.012 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:07:45.012 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:45.012 12:08:38 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:45.012 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:45.270 request: 00:07:45.270 { 00:07:45.270 "uuid": "95adcddf-a24a-47b0-8ee7-6ff5eca52e0b", 00:07:45.270 "method": "bdev_lvol_get_lvstores", 00:07:45.270 "req_id": 1 00:07:45.270 } 00:07:45.270 Got JSON-RPC error response 00:07:45.270 response: 00:07:45.270 { 00:07:45.270 "code": -19, 00:07:45.270 "message": "No such device" 00:07:45.270 } 00:07:45.270 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@653 -- # es=1 00:07:45.270 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:07:45.270 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:07:45.270 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:07:45.270 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:45.528 aio_bdev 00:07:45.528 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62 00:07:45.528 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local bdev_name=22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62 00:07:45.528 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@900 -- # local bdev_timeout= 00:07:45.528 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@901 -- # local i 00:07:45.528 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # [[ -z '' ]] 00:07:45.528 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # bdev_timeout=2000 00:07:45.528 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:45.787 12:08:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62 -t 2000 00:07:46.046 [ 00:07:46.046 { 00:07:46.046 "name": "22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62", 00:07:46.046 "aliases": [ 00:07:46.046 "lvs/lvol" 00:07:46.046 ], 00:07:46.046 "product_name": "Logical Volume", 00:07:46.046 "block_size": 4096, 00:07:46.046 "num_blocks": 38912, 00:07:46.046 "uuid": "22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62", 00:07:46.046 "assigned_rate_limits": { 00:07:46.046 "rw_ios_per_sec": 0, 00:07:46.046 "rw_mbytes_per_sec": 0, 00:07:46.046 "r_mbytes_per_sec": 0, 00:07:46.046 "w_mbytes_per_sec": 0 00:07:46.046 }, 00:07:46.046 "claimed": false, 00:07:46.046 "zoned": false, 00:07:46.046 "supported_io_types": { 00:07:46.046 "read": true, 00:07:46.046 "write": true, 00:07:46.046 "unmap": true, 00:07:46.046 "flush": false, 00:07:46.046 "reset": true, 00:07:46.046 "nvme_admin": false, 00:07:46.046 "nvme_io": false, 00:07:46.046 "nvme_io_md": false, 00:07:46.046 "write_zeroes": true, 00:07:46.046 "zcopy": false, 00:07:46.046 "get_zone_info": false, 00:07:46.046 "zone_management": false, 00:07:46.046 "zone_append": false, 00:07:46.046 "compare": false, 00:07:46.046 "compare_and_write": false, 
00:07:46.046 "abort": false, 00:07:46.046 "seek_hole": true, 00:07:46.046 "seek_data": true, 00:07:46.046 "copy": false, 00:07:46.046 "nvme_iov_md": false 00:07:46.046 }, 00:07:46.046 "driver_specific": { 00:07:46.046 "lvol": { 00:07:46.046 "lvol_store_uuid": "95adcddf-a24a-47b0-8ee7-6ff5eca52e0b", 00:07:46.046 "base_bdev": "aio_bdev", 00:07:46.046 "thin_provision": false, 00:07:46.046 "num_allocated_clusters": 38, 00:07:46.046 "snapshot": false, 00:07:46.046 "clone": false, 00:07:46.046 "esnap_clone": false 00:07:46.047 } 00:07:46.047 } 00:07:46.047 } 00:07:46.047 ] 00:07:46.047 12:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@907 -- # return 0 00:07:46.047 12:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:46.047 12:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:46.306 12:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:46.306 12:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:46.306 12:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:46.566 12:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:46.566 12:08:39 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 22bd7ed1-065d-4a5b-adbd-e33bfb7d8b62 00:07:46.824 12:08:39 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 95adcddf-a24a-47b0-8ee7-6ff5eca52e0b 00:07:47.084 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:47.342 00:07:47.342 real 0m19.256s 00:07:47.342 user 0m49.749s 00:07:47.342 sys 0m4.724s 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:47.342 ************************************ 00:07:47.342 END TEST lvs_grow_dirty 00:07:47.342 ************************************ 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # type=--id 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@809 -- # id=0 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@820 -- # for n in $shm_files 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:07:47.342 nvmf_trace.0 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@823 -- # return 0 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:47.342 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:47.342 rmmod nvme_tcp 00:07:47.342 rmmod nvme_fabrics 00:07:47.600 rmmod nvme_keyring 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 2787699 ']' 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 2787699 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@950 -- # '[' -z 2787699 ']' 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # kill -0 2787699 
00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # uname 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2787699 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2787699' 00:07:47.600 killing process with pid 2787699 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@969 -- # kill 2787699 00:07:47.600 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@974 -- # wait 2787699 00:07:47.860 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:47.860 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:47.860 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:47.860 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:47.860 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:47.860 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:47.860 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:47.860 12:08:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:49.764 12:08:42 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:49.764 00:07:49.764 real 0m43.132s 00:07:49.764 user 1m13.101s 00:07:49.764 sys 0m8.558s 00:07:49.764 12:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:49.764 12:08:42 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:49.764 ************************************ 00:07:49.764 END TEST nvmf_lvs_grow 00:07:49.764 ************************************ 00:07:49.764 12:08:42 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:49.764 12:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:49.764 12:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:49.764 12:08:42 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:50.022 ************************************ 00:07:50.022 START TEST nvmf_bdev_io_wait 00:07:50.022 ************************************ 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:07:50.023 * Looking for test storage... 
00:07:50.023 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:07:50.023 12:08:43 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:07:50.023 12:08:43 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:51.924 12:08:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:07:51.924 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:07:51.924 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.924 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.924 12:08:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:07:51.925 Found net devices under 0000:0a:00.0: cvl_0_0 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:07:51.925 Found net devices under 0000:0a:00.1: cvl_0_1 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:07:51.925 12:08:45 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:51.925 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:52.184 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:52.184 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:07:52.184 00:07:52.184 --- 10.0.0.2 ping statistics --- 00:07:52.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.184 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:52.184 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:52.184 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.094 ms 00:07:52.184 00:07:52.184 --- 10.0.0.1 ping statistics --- 00:07:52.184 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:52.184 rtt min/avg/max/mdev = 0.094/0.094/0.094/0.000 ms 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@724 -- # xtrace_disable 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=2790223 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 2790223 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@831 -- # '[' -z 2790223 ']' 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # local max_retries=100 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:52.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # xtrace_disable 00:07:52.184 12:08:45 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:52.184 [2024-07-26 12:08:45.258388] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:07:52.184 [2024-07-26 12:08:45.258470] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:52.184 EAL: No free 2048 kB hugepages reported on node 1 00:07:52.184 [2024-07-26 12:08:45.327208] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:52.442 [2024-07-26 12:08:45.444804] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:07:52.442 [2024-07-26 12:08:45.444858] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:52.442 [2024-07-26 12:08:45.444874] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:52.442 [2024-07-26 12:08:45.444887] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:52.442 [2024-07-26 12:08:45.444898] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:52.442 [2024-07-26 12:08:45.444982] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:52.442 [2024-07-26 12:08:45.445038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:52.442 [2024-07-26 12:08:45.445083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:52.442 [2024-07-26 12:08:45.445086] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:53.007 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:07:53.007 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # return 0 00:07:53.007 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:53.007 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@730 -- # xtrace_disable 00:07:53.007 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.007 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:53.007 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:07:53.007 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.007 
12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.007 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.007 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:07:53.007 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.008 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.266 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.266 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:53.266 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.266 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.266 [2024-07-26 12:08:46.328536] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:53.266 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.266 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:07:53.266 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.266 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.266 Malloc0 00:07:53.266 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.266 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:07:53.266 
12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:53.267 [2024-07-26 12:08:46.393805] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=2790378 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=2790380 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:53.267 { 00:07:53.267 "params": { 00:07:53.267 "name": "Nvme$subsystem", 00:07:53.267 "trtype": "$TEST_TRANSPORT", 00:07:53.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.267 "adrfam": "ipv4", 00:07:53.267 "trsvcid": "$NVMF_PORT", 00:07:53.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.267 "hdgst": ${hdgst:-false}, 00:07:53.267 "ddgst": ${ddgst:-false} 00:07:53.267 }, 00:07:53.267 "method": "bdev_nvme_attach_controller" 00:07:53.267 } 00:07:53.267 EOF 00:07:53.267 )") 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=2790382 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:53.267 12:08:46 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:53.267 { 00:07:53.267 "params": { 00:07:53.267 "name": "Nvme$subsystem", 00:07:53.267 "trtype": "$TEST_TRANSPORT", 00:07:53.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.267 "adrfam": "ipv4", 00:07:53.267 "trsvcid": "$NVMF_PORT", 00:07:53.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.267 "hdgst": ${hdgst:-false}, 00:07:53.267 "ddgst": ${ddgst:-false} 00:07:53.267 }, 00:07:53.267 "method": "bdev_nvme_attach_controller" 00:07:53.267 } 00:07:53.267 EOF 00:07:53.267 )") 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=2790384 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:53.267 { 00:07:53.267 "params": { 00:07:53.267 "name": "Nvme$subsystem", 00:07:53.267 "trtype": "$TEST_TRANSPORT", 00:07:53.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.267 "adrfam": "ipv4", 
00:07:53.267 "trsvcid": "$NVMF_PORT", 00:07:53.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.267 "hdgst": ${hdgst:-false}, 00:07:53.267 "ddgst": ${ddgst:-false} 00:07:53.267 }, 00:07:53.267 "method": "bdev_nvme_attach_controller" 00:07:53.267 } 00:07:53.267 EOF 00:07:53.267 )") 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:53.267 { 00:07:53.267 "params": { 00:07:53.267 "name": "Nvme$subsystem", 00:07:53.267 "trtype": "$TEST_TRANSPORT", 00:07:53.267 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:53.267 "adrfam": "ipv4", 00:07:53.267 "trsvcid": "$NVMF_PORT", 00:07:53.267 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:53.267 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:53.267 "hdgst": ${hdgst:-false}, 00:07:53.267 "ddgst": ${ddgst:-false} 00:07:53.267 }, 00:07:53.267 "method": "bdev_nvme_attach_controller" 00:07:53.267 } 00:07:53.267 EOF 00:07:53.267 )") 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
target/bdev_io_wait.sh@37 -- # wait 2790378 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:53.267 "params": { 00:07:53.267 "name": "Nvme1", 00:07:53.267 "trtype": "tcp", 00:07:53.267 "traddr": "10.0.0.2", 00:07:53.267 "adrfam": "ipv4", 00:07:53.267 "trsvcid": "4420", 00:07:53.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.267 "hdgst": false, 00:07:53.267 "ddgst": false 00:07:53.267 }, 00:07:53.267 "method": "bdev_nvme_attach_controller" 00:07:53.267 }' 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:53.267 "params": { 00:07:53.267 "name": "Nvme1", 00:07:53.267 "trtype": "tcp", 00:07:53.267 "traddr": "10.0.0.2", 00:07:53.267 "adrfam": "ipv4", 00:07:53.267 "trsvcid": "4420", 00:07:53.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.267 "hdgst": false, 00:07:53.267 "ddgst": false 00:07:53.267 }, 00:07:53.267 "method": "bdev_nvme_attach_controller" 00:07:53.267 }' 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 
-- # printf '%s\n' '{ 00:07:53.267 "params": { 00:07:53.267 "name": "Nvme1", 00:07:53.267 "trtype": "tcp", 00:07:53.267 "traddr": "10.0.0.2", 00:07:53.267 "adrfam": "ipv4", 00:07:53.267 "trsvcid": "4420", 00:07:53.267 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.267 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.267 "hdgst": false, 00:07:53.267 "ddgst": false 00:07:53.267 }, 00:07:53.267 "method": "bdev_nvme_attach_controller" 00:07:53.267 }' 00:07:53.267 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:07:53.268 12:08:46 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:53.268 "params": { 00:07:53.268 "name": "Nvme1", 00:07:53.268 "trtype": "tcp", 00:07:53.268 "traddr": "10.0.0.2", 00:07:53.268 "adrfam": "ipv4", 00:07:53.268 "trsvcid": "4420", 00:07:53.268 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:07:53.268 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:07:53.268 "hdgst": false, 00:07:53.268 "ddgst": false 00:07:53.268 }, 00:07:53.268 "method": "bdev_nvme_attach_controller" 00:07:53.268 }' 00:07:53.268 [2024-07-26 12:08:46.440726] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:07:53.268 [2024-07-26 12:08:46.440724] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:07:53.268 [2024-07-26 12:08:46.440724] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:07:53.268 [2024-07-26 12:08:46.440811] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:07:53.268 [2024-07-26 12:08:46.440811] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:07:53.268 [2024-07-26 12:08:46.440812] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:07:53.268 [2024-07-26 12:08:46.441948] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:07:53.268 [2024-07-26 12:08:46.442016] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:07:53.268 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.526 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.526 [2024-07-26 12:08:46.612011] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.526 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.526 [2024-07-26 12:08:46.710382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:53.526 [2024-07-26 12:08:46.714369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.784 EAL: No free 2048 kB hugepages reported on node 1 00:07:53.784 [2024-07-26 12:08:46.813957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:07:53.784 [2024-07-26 12:08:46.815479] app.c: 
909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.784 [2024-07-26 12:08:46.891807] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:53.784 [2024-07-26 12:08:46.917375] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:07:53.784 [2024-07-26 12:08:46.985809] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:07:54.042 Running I/O for 1 seconds... 00:07:54.042 Running I/O for 1 seconds... 00:07:54.042 Running I/O for 1 seconds... 00:07:54.042 Running I/O for 1 seconds... 00:07:54.977 00:07:54.977 Latency(us) 00:07:54.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.977 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:07:54.977 Nvme1n1 : 1.01 11184.98 43.69 0.00 0.00 11401.24 6310.87 21456.97 00:07:54.977 =================================================================================================================== 00:07:54.977 Total : 11184.98 43.69 0.00 0.00 11401.24 6310.87 21456.97 00:07:54.977 00:07:54.977 Latency(us) 00:07:54.977 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:54.977 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:07:54.977 Nvme1n1 : 1.02 5402.04 21.10 0.00 0.00 23407.48 9223.59 32428.18 00:07:54.977 =================================================================================================================== 00:07:54.977 Total : 5402.04 21.10 0.00 0.00 23407.48 9223.59 32428.18 00:07:55.235 00:07:55.235 Latency(us) 00:07:55.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.235 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:07:55.235 Nvme1n1 : 1.00 197222.52 770.40 0.00 0.00 646.36 270.03 849.54 00:07:55.235 =================================================================================================================== 00:07:55.235 Total : 197222.52 770.40 0.00 0.00 646.36 270.03 849.54 
00:07:55.235 00:07:55.235 Latency(us) 00:07:55.235 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.235 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:07:55.236 Nvme1n1 : 1.01 5380.80 21.02 0.00 0.00 23673.95 8980.86 49710.27 00:07:55.236 =================================================================================================================== 00:07:55.236 Total : 5380.80 21.02 0.00 0.00 23673.95 8980.86 49710.27 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 2790380 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 2790382 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 2790384 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@561 -- # xtrace_disable 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:07:55.494 
12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:55.494 rmmod nvme_tcp 00:07:55.494 rmmod nvme_fabrics 00:07:55.494 rmmod nvme_keyring 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 2790223 ']' 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 2790223 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@950 -- # '[' -z 2790223 ']' 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # kill -0 2790223 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # uname 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2790223 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2790223' 00:07:55.494 killing process with pid 2790223 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@969 -- # kill 2790223 00:07:55.494 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@974 -- # wait 2790223 00:07:55.752 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:55.752 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:55.752 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:55.752 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:55.752 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:55.753 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:55.753 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:55.753 12:08:48 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.283 12:08:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:58.283 00:07:58.283 real 0m7.976s 00:07:58.283 user 0m20.381s 00:07:58.283 sys 0m3.532s 00:07:58.283 12:08:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1126 -- # xtrace_disable 00:07:58.283 12:08:50 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:07:58.283 ************************************ 00:07:58.283 END TEST nvmf_bdev_io_wait 00:07:58.283 ************************************ 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:58.283 ************************************ 00:07:58.283 START TEST nvmf_queue_depth 00:07:58.283 ************************************ 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:07:58.283 * Looking for test storage... 00:07:58.283 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:58.283 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.284 12:08:51 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # 
MALLOC_BLOCK_SIZE=512 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:07:58.284 12:08:51 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 
00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:00.185 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:00.186 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ 
ice == unbound ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:00.186 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.186 12:08:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:00.186 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:00.186 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:00.186 12:08:53 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:00.186 
12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:00.186 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:00.186 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.194 ms 00:08:00.186 00:08:00.186 --- 10.0.0.2 ping statistics --- 00:08:00.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.186 rtt min/avg/max/mdev = 0.194/0.194/0.194/0.000 ms 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:00.186 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:00.186 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.138 ms 00:08:00.186 00:08:00.186 --- 10.0.0.1 ping statistics --- 00:08:00.186 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:00.186 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@724 -- # xtrace_disable 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=2792606 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec 
cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 2792606 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2792606 ']' 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:00.186 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:00.186 12:08:53 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:00.186 [2024-07-26 12:08:53.259135] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:08:00.186 [2024-07-26 12:08:53.259214] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:00.186 EAL: No free 2048 kB hugepages reported on node 1 00:08:00.186 [2024-07-26 12:08:53.327842] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:00.445 [2024-07-26 12:08:53.443047] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:00.445 [2024-07-26 12:08:53.443120] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:00.445 [2024-07-26 12:08:53.443135] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:00.445 [2024-07-26 12:08:53.443162] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:00.445 [2024-07-26 12:08:53.443173] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:00.445 [2024-07-26 12:08:53.443199] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.012 [2024-07-26 12:08:54.218298] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 
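The namespace plumbing logged above (`ip netns add`, moving `cvl_0_0` into the namespace, addressing both ends, the `iptables` accept rule on port 4420, and the bidirectional pings) can be sketched as a dry-run script. Interface names, namespace name, and the 10.0.0.0/24 addresses are taken from this run's log; everything else is an assumption, and `run` only prints, so no root is needed to read the plan:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace topology built in the log above.
# cvl_0_0 / cvl_0_1, the namespace name and the addresses come from this
# run; "run" only echoes each command instead of executing it.
set -euo pipefail

NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0   # moved into the namespace; serves 10.0.0.2 (target side)
INI_IF=cvl_0_1   # stays in the root namespace; 10.0.0.1 (initiator side)

run() { printf '+ %s\n' "$*"; }   # swap for: sudo "$@" to apply for real

plan() {
  run ip -4 addr flush "$TGT_IF"
  run ip -4 addr flush "$INI_IF"
  run ip netns add "$NS"
  run ip link set "$TGT_IF" netns "$NS"
  run ip addr add 10.0.0.1/24 dev "$INI_IF"
  run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
  run ip link set "$INI_IF" up
  run ip netns exec "$NS" ip link set "$TGT_IF" up
  run ip netns exec "$NS" ip link set lo up
  run iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
  # Ping both directions, as the log does, to prove the path works.
  run ping -c 1 10.0.0.2
  run ip netns exec "$NS" ping -c 1 10.0.0.1
}

plan
```

The point of the namespace is that target and initiator traffic really crosses the physical NIC pair instead of looping back in the kernel.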
00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.012 Malloc0 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.012 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.271 [2024-07-26 12:08:54.278466] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:01.271 12:08:54 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=2792761 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 2792761 /var/tmp/bdevperf.sock 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@831 -- # '[' -z 2792761 ']' 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:01.271 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:01.271 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.271 [2024-07-26 12:08:54.324028] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:08:01.271 [2024-07-26 12:08:54.324128] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2792761 ] 00:08:01.271 EAL: No free 2048 kB hugepages reported on node 1 00:08:01.271 [2024-07-26 12:08:54.384674] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:01.271 [2024-07-26 12:08:54.500347] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:01.529 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:01.529 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # return 0 00:08:01.529 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:01.529 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:01.529 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:01.529 NVMe0n1 00:08:01.529 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:01.529 12:08:54 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:01.788 Running I/O for 10 seconds... 
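The provisioning and measurement flow above — create the TCP transport, back it with a Malloc bdev, expose it through a subsystem and listener, then attach bdevperf at queue depth 1024 — is driven entirely over JSON-RPC. A print-only sketch of the same sequence; the NQN, serial, sizes and queue depth are copied from the log, while `SPDK_DIR` and the `rpc.py`/`bdevperf` paths are assumptions based on a standard SPDK checkout:

```shell
#!/usr/bin/env bash
# Print-only sketch of the target provisioning and bdevperf run above.
# Values (NQN, 64 MiB / 512 B malloc bdev, -q 1024 -o 4096 -w verify -t 10)
# are from the log; SPDK_DIR is a hypothetical checkout location.
set -euo pipefail

SPDK_DIR=${SPDK_DIR:-/path/to/spdk}   # assumption: standard SPDK tree
RPC="$SPDK_DIR/scripts/rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1
BPERF_SOCK=/var/tmp/bdevperf.sock

run() { printf '+ %s\n' "$*"; }

plan() {
  # Target side: nvmf_tgt runs inside the namespace on core mask 0x2.
  run ip netns exec cvl_0_0_ns_spdk "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2
  run "$RPC" nvmf_create_transport -t tcp -o -u 8192
  run "$RPC" bdev_malloc_create 64 512 -b Malloc0
  run "$RPC" nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001
  run "$RPC" nvmf_subsystem_add_ns "$NQN" Malloc0
  run "$RPC" nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  # Initiator side: bdevperf starts idle (-z) on its own RPC socket, a
  # controller is attached to it, then perform_tests kicks off the run.
  run "$SPDK_DIR/build/examples/bdevperf" -z -r "$BPERF_SOCK" -q 1024 -o 4096 -w verify -t 10
  run "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n "$NQN"
  run "$SPDK_DIR/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF_SOCK" perform_tests
}

plan
```

The `-z` flag is what makes the two-phase start possible: bdevperf waits for RPC configuration instead of needing its bdevs at launch.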
00:08:11.784 00:08:11.784 Latency(us) 00:08:11.784 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.784 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:11.784 Verification LBA range: start 0x0 length 0x4000 00:08:11.784 NVMe0n1 : 10.08 8431.88 32.94 0.00 0.00 120935.83 24758.04 71846.87 00:08:11.784 =================================================================================================================== 00:08:11.784 Total : 8431.88 32.94 0.00 0.00 120935.83 24758.04 71846.87 00:08:11.784 0 00:08:11.784 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 2792761 00:08:11.784 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2792761 ']' 00:08:11.784 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2792761 00:08:11.784 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:11.784 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:11.784 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2792761 00:08:11.784 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:11.784 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:11.784 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2792761' 00:08:11.784 killing process with pid 2792761 00:08:11.784 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2792761 00:08:11.784 Received shutdown signal, test time was about 10.000000 seconds 00:08:11.784 00:08:11.784 Latency(us) 00:08:11.784 Device 
Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:11.784 =================================================================================================================== 00:08:11.784 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:11.784 12:09:04 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2792761 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:12.042 rmmod nvme_tcp 00:08:12.042 rmmod nvme_fabrics 00:08:12.042 rmmod nvme_keyring 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 2792606 ']' 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 2792606 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@950 -- # '[' -z 2792606 ']' 
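`killprocess`, called twice above (first for the bdevperf pid, then for the nvmf target pid), follows a check-then-kill-then-reap shape. A minimal sketch of such a helper; the real one in autotest_common.sh does more (the `uname`/`ps`/sudo special-casing visible in the trace), so this only mirrors the basic shape:

```shell
#!/usr/bin/env bash
# Minimal killprocess-style helper: verify the pid is alive, report what
# is being killed, send SIGTERM and reap it. Sketch only — the real
# helper also inspects the process name and handles sudo-owned processes.
killprocess() {
  local pid=$1
  if ! kill -0 "$pid" 2>/dev/null; then
    echo "process $pid is not running" >&2
    return 0
  fi
  echo "killing process with pid $pid"
  kill "$pid"
  # 'wait' reaps only children of this shell, which holds for processes
  # the test script itself started (as in the log above).
  wait "$pid" 2>/dev/null || true
}

# Example: start a throwaway child and tear it down again.
sleep 30 &
killprocess $!
```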
00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # kill -0 2792606 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # uname 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:12.042 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2792606 00:08:12.301 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:12.301 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:12.301 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2792606' 00:08:12.301 killing process with pid 2792606 00:08:12.301 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@969 -- # kill 2792606 00:08:12.301 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@974 -- # wait 2792606 00:08:12.560 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:12.560 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:12.560 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:12.560 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:12.560 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:12.560 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:12.560 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:08:12.560 12:09:05 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.462 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:14.462 00:08:14.462 real 0m16.606s 00:08:14.462 user 0m23.346s 00:08:14.462 sys 0m3.023s 00:08:14.462 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:14.462 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:14.462 ************************************ 00:08:14.462 END TEST nvmf_queue_depth 00:08:14.462 ************************************ 00:08:14.462 12:09:07 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:14.462 12:09:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:14.462 12:09:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:14.462 12:09:07 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:14.462 ************************************ 00:08:14.462 START TEST nvmf_target_multipath 00:08:14.462 ************************************ 00:08:14.462 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:08:14.720 * Looking for test storage... 
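`run_test`, which brackets the queue-depth test ending above and opens the `nvmf_target_multipath` test that follows, wraps each test script in START/END banners and preserves its exit status. A sketch of that wrapper shape; the real one in autotest_common.sh also records timing and validates its argument count, which is omitted here:

```shell
#!/usr/bin/env bash
# Sketch of the run_test wrapper shape seen in the log: banner, run the
# wrapped command, closing banner, keep the command's exit status.
run_test() {
  local name=$1; shift
  echo "************************************"
  echo "START TEST $name"
  echo "************************************"
  "$@"
  local rc=$?   # captured before anything else can clobber $?
  echo "************************************"
  echo "END TEST $name"
  echo "************************************"
  return $rc
}

run_test demo_test echo "hello from the wrapped command"
```

Because the exit status is propagated, a failing test script fails the whole `run_test` invocation, which is how the harness decides pass/fail per section.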
00:08:14.720 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:14.720 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:14.720 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:08:14.720 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # 
nqn=nqn.2016-06.io.spdk:cnode1 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:08:14.721 12:09:07 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@291 -- # pci_devs=() 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- 
# mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 
00:08:16.623 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:16.623 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:16.623 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:16.623 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:16.623 12:09:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:16.623 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:16.624 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:16.624 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.158 ms 00:08:16.624 00:08:16.624 --- 10.0.0.2 ping statistics --- 00:08:16.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.624 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:16.624 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:16.624 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:08:16.624 00:08:16.624 --- 10.0.0.1 ping statistics --- 00:08:16.624 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:16.624 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:08:16.624 12:09:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:08:16.624 only one NIC for nvmf test 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:16.624 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:16.624 rmmod nvme_tcp 00:08:16.882 rmmod nvme_fabrics 00:08:16.882 rmmod nvme_keyring 00:08:16.882 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:16.882 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:16.882 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:16.882 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:16.882 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:16.883 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:16.883 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:16.883 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:16.883 12:09:09 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:16.883 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:16.883 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:16.883 12:09:09 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:08:18.784 12:09:11 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:18.784 00:08:18.784 real 0m4.269s 00:08:18.784 user 0m0.772s 00:08:18.784 sys 0m1.466s 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:08:18.784 ************************************ 00:08:18.784 END TEST nvmf_target_multipath 00:08:18.784 ************************************ 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:18.784 12:09:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:18.784 
12:09:11 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:18.784 ************************************ 00:08:18.784 START TEST nvmf_zcopy 00:08:18.784 ************************************ 00:08:18.784 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:08:19.066 * Looking for test storage... 00:08:19.066 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:19.066 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:19.067 12:09:12 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:08:19.067 12:09:12 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@296 -- # e810=() 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:20.977 12:09:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:20.977 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:20.977 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:20.977 12:09:14 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:20.977 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:20.977 
12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:20.977 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:20.977 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:20.977 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:08:20.977 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:08:20.977 00:08:20.977 --- 10.0.0.2 ping statistics --- 00:08:20.977 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.977 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:08:20.978 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:20.978 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:20.978 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.092 ms 00:08:20.978 00:08:20.978 --- 10.0.0.1 ping statistics --- 00:08:20.978 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:20.978 rtt min/avg/max/mdev = 0.092/0.092/0.092/0.000 ms 00:08:20.978 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:20.978 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:08:20.978 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:20.978 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:20.978 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:20.978 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:20.978 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:20.978 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:20.978 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:21.237 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:08:21.237 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:21.237 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:08:21.237 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.237 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=2798452 00:08:21.237 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:21.237 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 2798452 00:08:21.237 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@831 -- # '[' -z 2798452 ']' 00:08:21.237 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:21.237 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:21.237 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:21.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:21.237 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:21.237 12:09:14 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:21.237 [2024-07-26 12:09:14.285865] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:08:21.237 [2024-07-26 12:09:14.285938] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:21.237 EAL: No free 2048 kB hugepages reported on node 1 00:08:21.237 [2024-07-26 12:09:14.356289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:21.237 [2024-07-26 12:09:14.473989] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:21.237 [2024-07-26 12:09:14.474055] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:21.237 [2024-07-26 12:09:14.474081] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:21.237 [2024-07-26 12:09:14.474102] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:21.237 [2024-07-26 12:09:14.474129] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:21.237 [2024-07-26 12:09:14.474164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # return 0 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:22.171 [2024-07-26 12:09:15.287056] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:22.171 [2024-07-26 12:09:15.303245] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:22.171 malloc0 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:22.171 12:09:15 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:22.171 { 00:08:22.171 "params": { 00:08:22.171 "name": "Nvme$subsystem", 00:08:22.171 "trtype": "$TEST_TRANSPORT", 00:08:22.171 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:22.171 "adrfam": "ipv4", 00:08:22.171 "trsvcid": "$NVMF_PORT", 00:08:22.171 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:22.171 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:22.171 "hdgst": ${hdgst:-false}, 00:08:22.171 "ddgst": ${ddgst:-false} 00:08:22.171 }, 00:08:22.171 "method": "bdev_nvme_attach_controller" 00:08:22.171 } 00:08:22.171 EOF 00:08:22.171 )") 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:22.171 12:09:15 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:22.171 "params": { 00:08:22.171 "name": "Nvme1", 00:08:22.171 "trtype": "tcp", 00:08:22.171 "traddr": "10.0.0.2", 00:08:22.171 "adrfam": "ipv4", 00:08:22.171 "trsvcid": "4420", 00:08:22.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:22.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:22.171 "hdgst": false, 00:08:22.171 "ddgst": false 00:08:22.171 }, 00:08:22.171 "method": "bdev_nvme_attach_controller" 00:08:22.171 }' 00:08:22.171 [2024-07-26 12:09:15.394592] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:08:22.171 [2024-07-26 12:09:15.394667] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2798607 ] 00:08:22.429 EAL: No free 2048 kB hugepages reported on node 1 00:08:22.429 [2024-07-26 12:09:15.461611] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:22.429 [2024-07-26 12:09:15.580659] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:22.687 Running I/O for 10 seconds... 
00:08:34.890 00:08:34.890 Latency(us) 00:08:34.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:34.890 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:08:34.890 Verification LBA range: start 0x0 length 0x1000 00:08:34.890 Nvme1n1 : 10.02 5893.40 46.04 0.00 0.00 21659.33 3737.98 30486.38 00:08:34.890 =================================================================================================================== 00:08:34.890 Total : 5893.40 46.04 0.00 0.00 21659.33 3737.98 30486.38 00:08:34.890 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=2799919 00:08:34.890 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:08:34.890 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:34.890 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:08:34.890 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:08:34.890 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:08:34.890 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:08:34.890 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:08:34.890 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:08:34.890 { 00:08:34.890 "params": { 00:08:34.890 "name": "Nvme$subsystem", 00:08:34.890 "trtype": "$TEST_TRANSPORT", 00:08:34.890 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:34.890 "adrfam": "ipv4", 00:08:34.890 "trsvcid": "$NVMF_PORT", 00:08:34.890 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:34.890 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:34.890 "hdgst": 
${hdgst:-false}, 00:08:34.890 "ddgst": ${ddgst:-false} 00:08:34.890 }, 00:08:34.890 "method": "bdev_nvme_attach_controller" 00:08:34.890 } 00:08:34.890 EOF 00:08:34.890 )") 00:08:34.890 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:08:34.890 [2024-07-26 12:09:26.214945] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.890 [2024-07-26 12:09:26.214995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.890 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:08:34.890 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:08:34.890 12:09:26 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:08:34.890 "params": { 00:08:34.890 "name": "Nvme1", 00:08:34.890 "trtype": "tcp", 00:08:34.890 "traddr": "10.0.0.2", 00:08:34.890 "adrfam": "ipv4", 00:08:34.890 "trsvcid": "4420", 00:08:34.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:34.890 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:34.890 "hdgst": false, 00:08:34.890 "ddgst": false 00:08:34.890 }, 00:08:34.890 "method": "bdev_nvme_attach_controller" 00:08:34.890 }' 00:08:34.890 [2024-07-26 12:09:26.222901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.890 [2024-07-26 12:09:26.222928] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.890 [2024-07-26 12:09:26.230919] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.890 [2024-07-26 12:09:26.230944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.890 [2024-07-26 12:09:26.238939] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.890 [2024-07-26 12:09:26.238964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.890 [2024-07-26 12:09:26.246963] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.890 [2024-07-26 12:09:26.246988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.890 [2024-07-26 12:09:26.254986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.890 [2024-07-26 12:09:26.255011] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.890 [2024-07-26 12:09:26.255509] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:08:34.890 [2024-07-26 12:09:26.255595] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2799919 ] 00:08:34.890 [2024-07-26 12:09:26.263009] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.890 [2024-07-26 12:09:26.263035] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.890 [2024-07-26 12:09:26.271029] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.890 [2024-07-26 12:09:26.271054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.890 [2024-07-26 12:09:26.279051] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.890 [2024-07-26 12:09:26.279085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.890 [2024-07-26 12:09:26.287079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.890 [2024-07-26 12:09:26.287104] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.890 EAL: No free 2048 kB hugepages reported on node 1 00:08:34.890 [2024-07-26 12:09:26.295130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:34.890 [2024-07-26 12:09:26.295152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.890 [2024-07-26 12:09:26.303132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.303153] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.311145] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.311170] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.319167] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.319188] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.324158] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:34.891 [2024-07-26 12:09:26.327196] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.327221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.335228] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.335264] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.343219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.343241] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.351238] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.351259] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.359264] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.359286] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.367283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.367304] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.375306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.375326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.383329] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.383366] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.391390] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.391439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.399417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.399448] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.407424] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.407450] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.415432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.415467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.423468] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.423494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.431492] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.431516] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.439510] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.439535] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.444981] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:34.891 [2024-07-26 12:09:26.447532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.447556] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.455553] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.455577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.463604] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.463642] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.471633] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.471675] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.479659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.479702] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 
12:09:26.487676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.487719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.495701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.495743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.503722] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.503763] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.511742] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.511783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.519732] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.519758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.527784] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.527821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.535809] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.535848] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.543815] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.543842] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.551819] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.551844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.559845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.559881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.567862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.567887] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.575896] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.575926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.583917] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.583945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.591941] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.591968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.599951] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.599990] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.607989] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.608016] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.616014] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.616041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.624031] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.624057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.632071] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.632115] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.640085] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.640121] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 Running I/O for 5 seconds... 00:08:34.891 [2024-07-26 12:09:26.648120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.648141] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.662046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.662084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.672984] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.673012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.684136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.891 [2024-07-26 12:09:26.684165] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.891 [2024-07-26 12:09:26.697224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: 
Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.697251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.707768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.707795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.718550] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.718577] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.731698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.731725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.741589] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.741624] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.752302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.752346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.765391] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.765420] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.775511] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.775539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.786269] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 
[2024-07-26 12:09:26.786297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.799212] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.799239] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.809551] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.809578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.820777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.820804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.833924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.833966] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.844523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.844551] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.855227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.855255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.865705] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.865733] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.875900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.875927] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.886286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.886313] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.897263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.897290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.908164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.908191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.919292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.919320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.930117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.930144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.941027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.941054] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.953716] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.953750] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.963690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.963717] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:34.892 [2024-07-26 12:09:26.974568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.974595] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.987317] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.987345] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:26.997411] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:26.997438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.008435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.008462] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.020937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.020964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.030835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.030863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.041725] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.041752] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.054467] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.054494] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.064439] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.064467] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.075487] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.075515] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.086308] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.086335] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.097013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.097041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.108130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.108157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.119430] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.119457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.129872] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.129899] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.140502] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.140529] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.151223] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.151250] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.161916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.161972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.172929] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.172955] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.892 [2024-07-26 12:09:27.185797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.892 [2024-07-26 12:09:27.185824] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.196036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.196070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.207036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.207071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.219277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.219305] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.228676] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.228703] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.240124] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 
[2024-07-26 12:09:27.240151] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.251337] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.251364] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.261802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.261829] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.272580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.272608] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.283448] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.283475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.294471] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.294498] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.305580] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.305607] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.318203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.318230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.328132] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.328159] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.338863] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.338890] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.349635] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.349678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.360591] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.360618] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.371694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.371721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.382532] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.382559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.393268] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.393295] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.403900] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.403926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.415183] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.415211] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:34.893 [2024-07-26 12:09:27.426125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.426152] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.437137] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.437164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.447813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.447840] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.458250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.458278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.469793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.469821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.480588] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.480615] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.491431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.491458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.504412] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.504439] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.514732] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.514758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.525443] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.525470] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.536189] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.536216] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.546431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.546457] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.556848] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.556875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.567782] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.567809] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.580462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.580489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.590697] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.590723] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.601084] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.601126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.612286] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.612312] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.622985] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.623012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.633536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.633564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.646130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.646157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.656117] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.656144] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.666718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.666745] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.679876] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.679903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.689172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 
[2024-07-26 12:09:27.689199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.702385] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.702412] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.712613] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.712640] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.723549] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.723575] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.735921] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.735947] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.746178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.746205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.757116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.757143] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.769605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.769631] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.779873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.779916] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.790752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.790778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.803398] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.803425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.812730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.812757] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.823986] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.824013] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.834893] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.834919] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.847316] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.847342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.856830] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.856857] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.868065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.868092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:34.893 [2024-07-26 12:09:27.880916] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.880944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.890983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.891010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.901120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.901148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.912292] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.912326] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.922952] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.922979] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.933747] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.933774] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.944336] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.944363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.955803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.955831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.966193] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.966221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.976950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.976978] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.989871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.989906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:27.999936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:27.999964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:28.010431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:28.010461] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:28.021355] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:28.021383] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:28.032111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:28.032149] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:28.045105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:28.045132] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:28.054463] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:28.054490] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:28.066019] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:28.066046] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:28.076554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:28.076581] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:28.087320] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:28.087347] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:28.099832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:28.099859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:28.109914] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:28.109941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:28.120801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:28.120828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:34.893 [2024-07-26 12:09:28.133333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:34.893 [2024-07-26 12:09:28.133360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.145776] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 
[2024-07-26 12:09:28.145806] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.155365] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.155392] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.166149] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.166176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.176680] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.176706] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.189015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.189042] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.199023] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.199066] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.209245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.209273] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.219400] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.219427] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.229819] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.229847] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.240080] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.240116] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.250429] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.250456] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.262773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.262800] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.272771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.272798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.282869] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.282896] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.293107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.293134] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.303455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.303482] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.314142] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.314169] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:35.152 [2024-07-26 12:09:28.324478] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.324505] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.335389] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.335416] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.345651] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.345678] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.356050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.356085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.366234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.366262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.376873] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.376901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.389333] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.389360] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.152 [2024-07-26 12:09:28.399220] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.152 [2024-07-26 12:09:28.399257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.409766] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.409795] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.420139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.420166] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.430515] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.430542] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.441233] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.441260] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.452134] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.452160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.463199] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.463227] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.473744] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.473771] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.484771] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.484798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.497620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.497647] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.507948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.507975] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.518752] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.518779] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.529531] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.529574] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.540202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.540229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.550937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.550964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.561794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.561821] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.572801] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.572828] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.585612] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 
[2024-07-26 12:09:28.585639] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.595146] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.595173] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.606182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.606217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.616518] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.616545] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.627174] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.627202] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.638182] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.638209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.649130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.649157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.412 [2024-07-26 12:09:28.659777] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.412 [2024-07-26 12:09:28.659803] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.672 [2024-07-26 12:09:28.670953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.670984] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.683280] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.683307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.693346] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.693373] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.703684] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.703710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.714344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.714371] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.724673] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.724699] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.735180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.735207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.745803] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.745831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.756901] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.756927] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:35.673 [2024-07-26 12:09:28.767552] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.767579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.778657] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.778683] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.789513] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.789540] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.802219] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.802246] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.812227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.812262] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.822937] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.822964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.835099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.835126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.844396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.844423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.855593] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.855619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.866765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.866791] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.877983] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.878009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.889109] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.889136] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.901940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.901967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.912037] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.912071] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.673 [2024-07-26 12:09:28.922472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.673 [2024-07-26 12:09:28.922499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:28.933101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:28.933129] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:28.945592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:28.945619] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:28.955522] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:28.955549] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:28.966542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:28.966570] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:28.979127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:28.979154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:28.988576] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:28.988603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:29.000072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:29.000100] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:29.010793] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:29.010820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:29.021543] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:29.021571] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:29.032638] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 
[2024-07-26 12:09:29.032666] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:29.043658] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:29.043686] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:29.054536] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:29.054564] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:29.065205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:29.065233] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:29.076120] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:29.076148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:29.086741] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:29.086769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:29.097326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:29.097354] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:29.107652] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:29.107680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:29.118259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:29.118287] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:35.933 [2024-07-26 12:09:29.128972] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:35.933 [2024-07-26 12:09:29.129000] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair — subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace — repeats at roughly 10 ms intervals from [2024-07-26 12:09:29.139625] through [2024-07-26 12:09:30.913849]; identical entries elided ...]
00:08:37.754 [2024-07-26 12:09:30.924572] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.754 [2024-07-26 12:09:30.924598]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.754 [2024-07-26 12:09:30.935191] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.754 [2024-07-26 12:09:30.935218] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.754 [2024-07-26 12:09:30.946367] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.754 [2024-07-26 12:09:30.946394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.754 [2024-07-26 12:09:30.957048] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.754 [2024-07-26 12:09:30.957084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.754 [2024-07-26 12:09:30.969593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.754 [2024-07-26 12:09:30.969620] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.754 [2024-07-26 12:09:30.979470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.754 [2024-07-26 12:09:30.979513] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.754 [2024-07-26 12:09:30.990266] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.754 [2024-07-26 12:09:30.990293] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:37.754 [2024-07-26 12:09:31.002931] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:37.754 [2024-07-26 12:09:31.002958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.020567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.020597] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:38.013 [2024-07-26 12:09:31.031263] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.031290] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.041938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.041964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.052987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.053014] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.064075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.064101] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.077153] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.077181] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.087053] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.087087] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.098408] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.098435] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.109204] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.109231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.120156] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.120183] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.130877] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.130903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.141688] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.141715] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.154736] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.154766] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.165418] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.165449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.176958] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.176988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.187936] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.187964] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.199376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.199403] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.210224] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.210251] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.221046] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.221085] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.232131] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.232158] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.244926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.244956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.255099] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.013 [2024-07-26 12:09:31.255126] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.013 [2024-07-26 12:09:31.266163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.014 [2024-07-26 12:09:31.266190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.277307] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.277346] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.288230] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.288257] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.299323] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 
[2024-07-26 12:09:31.299357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.310753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.310792] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.323813] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.323844] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.333791] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.333823] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.345205] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.345234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.356751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.356782] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.369718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.369749] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.379960] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.379991] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.390768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.390799] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.403961] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.403988] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.414352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.414380] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.425370] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.425397] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.436735] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.436765] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.448001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.448031] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.461368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.461394] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.472036] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.472069] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.483438] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.483466] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:38.274 [2024-07-26 12:09:31.495082] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.495109] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.506695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.506725] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.274 [2024-07-26 12:09:31.518130] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.274 [2024-07-26 12:09:31.518157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.535 [2024-07-26 12:09:31.529481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.535 [2024-07-26 12:09:31.529521] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.535 [2024-07-26 12:09:31.540726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.535 [2024-07-26 12:09:31.540756] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.535 [2024-07-26 12:09:31.553568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.535 [2024-07-26 12:09:31.553598] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.535 [2024-07-26 12:09:31.563780] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.535 [2024-07-26 12:09:31.563810] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.535 [2024-07-26 12:09:31.575078] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.535 [2024-07-26 12:09:31.575105] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.535 [2024-07-26 12:09:31.588244] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.535 [2024-07-26 12:09:31.588271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.535 [2024-07-26 12:09:31.598745] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.535 [2024-07-26 12:09:31.598776] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.535 [2024-07-26 12:09:31.609871] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.535 [2024-07-26 12:09:31.609898] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.535 [2024-07-26 12:09:31.620645] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.535 [2024-07-26 12:09:31.620676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.631751] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.631777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.644664] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.644694] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.655013] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.655039] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.664788] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.664814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 00:08:38.536 Latency(us) 00:08:38.536 Device Information : runtime(s) IOPS MiB/s 
Fail/s TO/s Average min max 00:08:38.536 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:08:38.536 Nvme1n1 : 5.01 11765.23 91.92 0.00 0.00 10865.21 4903.06 22039.51 00:08:38.536 =================================================================================================================== 00:08:38.536 Total : 11765.23 91.92 0.00 0.00 10865.21 4903.06 22039.51 00:08:38.536 [2024-07-26 12:09:31.669414] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.669438] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.677562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.677603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.685557] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.685583] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.693622] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.693684] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.701674] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.701727] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.709694] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.709743] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.717718] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:08:38.536 [2024-07-26 12:09:31.717769] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.725726] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.725777] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.733762] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.733815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.741764] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.741831] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.749796] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.749846] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.757820] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.757872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.765847] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.765900] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.773860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.773911] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.536 [2024-07-26 12:09:31.781881] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.536 [2024-07-26 12:09:31.781931] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.789899] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.789948] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.797922] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.797972] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.805947] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.805995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.813965] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.814009] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.821946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.821971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.829968] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.829993] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.837994] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.838022] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.846015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.846056] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:08:38.804 [2024-07-26 12:09:31.854086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.854127] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.862128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.862178] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.870164] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.870209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.878125] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.878148] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.886141] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.886163] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.894158] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.894180] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.902178] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.902200] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.910242] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.910288] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.918265] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.918315] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.926282] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.926340] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.934250] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.934272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.942272] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.942294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 [2024-07-26 12:09:31.950293] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:08:38.804 [2024-07-26 12:09:31.950318] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:38.804 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (2799919) - No such process 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 2799919 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b 
malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:38.804 delay0 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:38.804 12:09:31 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:08:38.804 EAL: No free 2048 kB hugepages reported on node 1 00:08:39.065 [2024-07-26 12:09:32.065154] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:08:45.635 Initializing NVMe Controllers 00:08:45.635 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:08:45.635 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:08:45.635 Initialization complete. Launching workers. 
00:08:45.635 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 117 00:08:45.635 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 404, failed to submit 33 00:08:45.635 success 210, unsuccess 194, failed 0 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:45.635 rmmod nvme_tcp 00:08:45.635 rmmod nvme_fabrics 00:08:45.635 rmmod nvme_keyring 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 2798452 ']' 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 2798452 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@950 -- # '[' -z 2798452 ']' 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # kill -0 2798452 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
common/autotest_common.sh@955 -- # uname 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2798452 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2798452' 00:08:45.635 killing process with pid 2798452 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@969 -- # kill 2798452 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@974 -- # wait 2798452 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:45.635 12:09:38 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:47.590 00:08:47.590 real 
0m28.586s 00:08:47.590 user 0m42.240s 00:08:47.590 sys 0m8.195s 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1126 -- # xtrace_disable 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:08:47.590 ************************************ 00:08:47.590 END TEST nvmf_zcopy 00:08:47.590 ************************************ 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:47.590 ************************************ 00:08:47.590 START TEST nvmf_nmic 00:08:47.590 ************************************ 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:47.590 * Looking for test storage... 
00:08:47.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:47.590 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:47.591 
12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:47.591 12:09:40 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:08:47.591 12:09:40 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@295 -- # net_devs=() 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:08:49.499 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:08:49.499 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:08:49.499 Found net devices under 0000:0a:00.0: cvl_0_0 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.499 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:08:49.500 Found net devices under 0000:0a:00.1: cvl_0_1 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:49.500 12:09:42 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:49.500 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:49.758 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:49.758 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:49.758 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:49.758 PING 10.0.0.2 (10.0.0.2) 56(84) bytes 
of data. 00:08:49.758 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.120 ms 00:08:49.758 00:08:49.758 --- 10.0.0.2 ping statistics --- 00:08:49.758 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.758 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:08:49.758 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:49.758 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:08:49.758 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.090 ms 00:08:49.758 00:08:49.758 --- 10.0.0.1 ping statistics --- 00:08:49.759 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.759 rtt min/avg/max/mdev = 0.090/0.090/0.090/0.000 ms 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=2803306 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 2803306 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@831 -- # '[' -z 2803306 ']' 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # local max_retries=100 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # xtrace_disable 00:08:49.759 12:09:42 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.759 [2024-07-26 12:09:42.848798] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:08:49.759 [2024-07-26 12:09:42.848894] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.759 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.759 [2024-07-26 12:09:42.920343] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:50.019 [2024-07-26 12:09:43.042529] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:50.019 [2024-07-26 12:09:43.042599] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:50.019 [2024-07-26 12:09:43.042616] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:50.019 [2024-07-26 12:09:43.042630] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:50.019 [2024-07-26 12:09:43.042642] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:50.019 [2024-07-26 12:09:43.042730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.019 [2024-07-26 12:09:43.042788] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:50.019 [2024-07-26 12:09:43.042841] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:08:50.019 [2024-07-26 12:09:43.042844] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:50.586 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:08:50.586 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # return 0 00:08:50.586 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:50.586 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@730 -- # xtrace_disable 00:08:50.586 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.586 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:50.586 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:50.586 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.586 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.586 [2024-07-26 12:09:43.824777] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:50.586 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.586 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:50.586 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.586 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:08:50.846 Malloc0 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.846 [2024-07-26 12:09:43.876980] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:50.846 test case1: single bdev can't be used in multiple subsystems 
00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.846 [2024-07-26 12:09:43.900828] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:50.846 [2024-07-26 12:09:43.900856] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:50.846 [2024-07-26 12:09:43.900871] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.846 request: 00:08:50.846 { 00:08:50.846 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:50.846 "namespace": { 00:08:50.846 
"bdev_name": "Malloc0", 00:08:50.846 "no_auto_visible": false 00:08:50.846 }, 00:08:50.846 "method": "nvmf_subsystem_add_ns", 00:08:50.846 "req_id": 1 00:08:50.846 } 00:08:50.846 Got JSON-RPC error response 00:08:50.846 response: 00:08:50.846 { 00:08:50.846 "code": -32602, 00:08:50.846 "message": "Invalid parameters" 00:08:50.846 } 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:50.846 Adding namespace failed - expected result. 00:08:50.846 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:50.846 test case2: host connect to nvmf target in multiple paths 00:08:50.847 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:50.847 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@561 -- # xtrace_disable 00:08:50.847 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.847 [2024-07-26 12:09:43.908945] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:50.847 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:08:50.847 12:09:43 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:51.416 12:09:44 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:51.984 12:09:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:51.984 12:09:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:08:51.984 12:09:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:08:51.984 12:09:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:08:51.984 12:09:45 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:08:54.521 12:09:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:08:54.521 12:09:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:08:54.521 12:09:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:08:54.521 12:09:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:08:54.521 12:09:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:08:54.521 12:09:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:08:54.521 12:09:47 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:54.521 [global] 00:08:54.521 thread=1 00:08:54.521 invalidate=1 00:08:54.521 rw=write 00:08:54.521 time_based=1 00:08:54.521 runtime=1 00:08:54.521 ioengine=libaio 00:08:54.521 direct=1 00:08:54.521 bs=4096 00:08:54.521 iodepth=1 00:08:54.521 
norandommap=0 00:08:54.521 numjobs=1 00:08:54.521 00:08:54.521 verify_dump=1 00:08:54.521 verify_backlog=512 00:08:54.521 verify_state_save=0 00:08:54.521 do_verify=1 00:08:54.521 verify=crc32c-intel 00:08:54.521 [job0] 00:08:54.521 filename=/dev/nvme0n1 00:08:54.521 Could not set queue depth (nvme0n1) 00:08:54.521 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:54.521 fio-3.35 00:08:54.521 Starting 1 thread 00:08:55.455 00:08:55.455 job0: (groupid=0, jobs=1): err= 0: pid=2803951: Fri Jul 26 12:09:48 2024 00:08:55.455 read: IOPS=515, BW=2061KiB/s (2110kB/s)(2108KiB/1023msec) 00:08:55.455 slat (nsec): min=4333, max=51085, avg=13963.40, stdev=10230.47 00:08:55.455 clat (usec): min=292, max=41065, avg=1517.45, stdev=6760.01 00:08:55.455 lat (usec): min=298, max=41081, avg=1531.42, stdev=6761.74 00:08:55.455 clat percentiles (usec): 00:08:55.455 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 322], 00:08:55.455 | 30.00th=[ 326], 40.00th=[ 338], 50.00th=[ 367], 60.00th=[ 375], 00:08:55.455 | 70.00th=[ 383], 80.00th=[ 400], 90.00th=[ 433], 95.00th=[ 482], 00:08:55.455 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:08:55.455 | 99.99th=[41157] 00:08:55.455 write: IOPS=1000, BW=4004KiB/s (4100kB/s)(4096KiB/1023msec); 0 zone resets 00:08:55.455 slat (nsec): min=5452, max=46136, avg=11516.48, stdev=5789.72 00:08:55.455 clat (usec): min=159, max=386, avg=193.71, stdev=28.40 00:08:55.455 lat (usec): min=165, max=425, avg=205.22, stdev=31.35 00:08:55.455 clat percentiles (usec): 00:08:55.455 | 1.00th=[ 163], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 174], 00:08:55.455 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190], 00:08:55.455 | 70.00th=[ 196], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 241], 00:08:55.455 | 99.00th=[ 281], 99.50th=[ 338], 99.90th=[ 379], 99.95th=[ 388], 00:08:55.455 | 99.99th=[ 388] 00:08:55.455 bw ( KiB/s): min= 8192, max= 8192, per=100.00%, 
avg=8192.00, stdev= 0.00, samples=1 00:08:55.455 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:08:55.455 lat (usec) : 250=64.09%, 500=34.56%, 750=0.39% 00:08:55.455 lat (msec) : 50=0.97% 00:08:55.455 cpu : usr=0.88%, sys=1.96%, ctx=1551, majf=0, minf=2 00:08:55.455 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:55.455 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.455 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.455 issued rwts: total=527,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.455 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:55.455 00:08:55.455 Run status group 0 (all jobs): 00:08:55.455 READ: bw=2061KiB/s (2110kB/s), 2061KiB/s-2061KiB/s (2110kB/s-2110kB/s), io=2108KiB (2159kB), run=1023-1023msec 00:08:55.455 WRITE: bw=4004KiB/s (4100kB/s), 4004KiB/s-4004KiB/s (4100kB/s-4100kB/s), io=4096KiB (4194kB), run=1023-1023msec 00:08:55.455 00:08:55.455 Disk stats (read/write): 00:08:55.455 nvme0n1: ios=573/1024, merge=0/0, ticks=748/185, in_queue=933, util=96.19% 00:08:55.455 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:55.714 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:55.714 rmmod nvme_tcp 00:08:55.714 rmmod nvme_fabrics 00:08:55.714 rmmod nvme_keyring 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 2803306 ']' 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 2803306 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@950 -- # '[' -z 2803306 ']' 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # kill -0 2803306 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- # uname 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@955 -- 
# '[' Linux = Linux ']' 00:08:55.714 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2803306 00:08:55.715 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:08:55.715 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:08:55.715 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2803306' 00:08:55.715 killing process with pid 2803306 00:08:55.715 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@969 -- # kill 2803306 00:08:55.715 12:09:48 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@974 -- # wait 2803306 00:08:55.974 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:55.974 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:55.974 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:55.974 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:55.974 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:55.974 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:55.974 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:55.974 12:09:49 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:58.517 00:08:58.517 real 0m10.522s 00:08:58.517 user 0m25.202s 00:08:58.517 sys 0m2.301s 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1126 -- 
# xtrace_disable 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.517 ************************************ 00:08:58.517 END TEST nvmf_nmic 00:08:58.517 ************************************ 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.517 ************************************ 00:08:58.517 START TEST nvmf_fio_target 00:08:58.517 ************************************ 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:58.517 * Looking for test storage... 
00:08:58.517 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.517 12:09:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.517 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.518 12:09:51 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:08:58.518 12:09:51 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:00.424 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:00.424 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:00.424 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:00.424 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:00.424 12:09:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:00.424 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 
-- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:00.425 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:00.425 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:09:00.425 00:09:00.425 --- 10.0.0.2 ping statistics --- 00:09:00.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.425 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:00.425 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:00.425 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.121 ms 00:09:00.425 00:09:00.425 --- 10.0.0.1 ping statistics --- 00:09:00.425 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:00.425 rtt min/avg/max/mdev = 0.121/0.121/0.121/0.000 ms 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=2806029 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 2806029 00:09:00.425 12:09:53 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@831 -- # '[' -z 2806029 ']' 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:00.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:00.425 12:09:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:00.425 [2024-07-26 12:09:53.441346] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:09:00.425 [2024-07-26 12:09:53.441426] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:00.425 EAL: No free 2048 kB hugepages reported on node 1 00:09:00.425 [2024-07-26 12:09:53.510310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:00.425 [2024-07-26 12:09:53.632683] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:00.425 [2024-07-26 12:09:53.632748] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:00.425 [2024-07-26 12:09:53.632764] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:00.425 [2024-07-26 12:09:53.632786] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:00.425 [2024-07-26 12:09:53.632798] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:00.425 [2024-07-26 12:09:53.632874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:00.425 [2024-07-26 12:09:53.632934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:00.425 [2024-07-26 12:09:53.632989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:00.425 [2024-07-26 12:09:53.632986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:01.357 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:01.357 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # return 0 00:09:01.357 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:01.357 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:01.357 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:01.357 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:01.358 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:01.615 [2024-07-26 12:09:54.670692] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:01.615 12:09:54 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:01.872 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:01.872 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.130 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:02.130 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.387 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:02.387 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:02.644 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:02.644 12:09:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:02.902 12:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.160 12:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:03.160 12:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.417 12:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:03.417 12:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:03.675 12:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:03.675 12:09:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:09:03.933 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:04.190 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:04.190 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:04.478 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:04.479 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:04.736 12:09:57 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:04.994 [2024-07-26 12:09:58.063448] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:04.994 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:05.265 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:05.530 12:09:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:06.096 12:09:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:06.096 12:09:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:09:06.096 12:09:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:06.096 12:09:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:09:06.096 12:09:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:09:06.096 12:09:59 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:09:08.630 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:08.630 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:08.630 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:08.630 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:09:08.630 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:08.630 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:09:08.630 12:10:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:08.630 [global] 00:09:08.630 thread=1 00:09:08.630 invalidate=1 00:09:08.630 rw=write 00:09:08.630 time_based=1 00:09:08.630 runtime=1 00:09:08.630 ioengine=libaio 00:09:08.630 direct=1 00:09:08.630 bs=4096 00:09:08.630 iodepth=1 00:09:08.630 norandommap=0 00:09:08.630 numjobs=1 00:09:08.630 00:09:08.630 verify_dump=1 00:09:08.630 verify_backlog=512 00:09:08.630 verify_state_save=0 00:09:08.630 do_verify=1 00:09:08.630 verify=crc32c-intel 00:09:08.630 [job0] 00:09:08.630 filename=/dev/nvme0n1 00:09:08.630 [job1] 00:09:08.630 filename=/dev/nvme0n2 00:09:08.630 [job2] 00:09:08.630 filename=/dev/nvme0n3 00:09:08.630 [job3] 00:09:08.630 filename=/dev/nvme0n4 00:09:08.630 Could not set queue depth (nvme0n1) 00:09:08.630 Could not set queue depth (nvme0n2) 00:09:08.630 Could not set queue depth (nvme0n3) 00:09:08.631 Could not set queue depth (nvme0n4) 00:09:08.631 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:08.631 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:08.631 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:08.631 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:08.631 fio-3.35 00:09:08.631 Starting 4 threads 00:09:09.568 00:09:09.568 job0: (groupid=0, jobs=1): err= 0: pid=2807117: Fri Jul 26 12:10:02 2024 00:09:09.568 read: IOPS=303, BW=1215KiB/s (1244kB/s)(1220KiB/1004msec) 00:09:09.568 slat (nsec): min=5231, max=34485, avg=15719.62, stdev=7750.45 00:09:09.568 clat (usec): min=297, max=42046, avg=2706.74, stdev=9392.44 00:09:09.568 lat (usec): min=303, max=42080, avg=2722.46, stdev=9395.19 00:09:09.568 clat percentiles (usec): 00:09:09.568 | 1.00th=[ 306], 5.00th=[ 334], 10.00th=[ 347], 20.00th=[ 367], 
00:09:09.568 | 30.00th=[ 388], 40.00th=[ 412], 50.00th=[ 441], 60.00th=[ 465], 00:09:09.568 | 70.00th=[ 482], 80.00th=[ 498], 90.00th=[ 529], 95.00th=[41157], 00:09:09.568 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:09:09.568 | 99.99th=[42206] 00:09:09.568 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets 00:09:09.568 slat (nsec): min=5592, max=63857, avg=12279.81, stdev=6721.56 00:09:09.568 clat (usec): min=174, max=998, avg=319.55, stdev=124.38 00:09:09.569 lat (usec): min=186, max=1014, avg=331.83, stdev=127.21 00:09:09.569 clat percentiles (usec): 00:09:09.569 | 1.00th=[ 186], 5.00th=[ 200], 10.00th=[ 206], 20.00th=[ 219], 00:09:09.569 | 30.00th=[ 233], 40.00th=[ 262], 50.00th=[ 289], 60.00th=[ 326], 00:09:09.569 | 70.00th=[ 379], 80.00th=[ 383], 90.00th=[ 420], 95.00th=[ 515], 00:09:09.569 | 99.00th=[ 807], 99.50th=[ 963], 99.90th=[ 996], 99.95th=[ 996], 00:09:09.569 | 99.99th=[ 996] 00:09:09.569 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:09:09.569 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:09.569 lat (usec) : 250=23.01%, 500=66.59%, 750=6.85%, 1000=1.47% 00:09:09.569 lat (msec) : 50=2.08% 00:09:09.569 cpu : usr=0.50%, sys=1.20%, ctx=819, majf=0, minf=2 00:09:09.569 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.569 issued rwts: total=305,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.569 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.569 job1: (groupid=0, jobs=1): err= 0: pid=2807118: Fri Jul 26 12:10:02 2024 00:09:09.569 read: IOPS=21, BW=87.0KiB/s (89.1kB/s)(88.0KiB/1011msec) 00:09:09.569 slat (nsec): min=13436, max=34080, avg=25779.09, stdev=8608.39 00:09:09.569 clat (usec): min=40570, max=41042, avg=40951.58, 
stdev=89.40 00:09:09.569 lat (usec): min=40587, max=41075, avg=40977.36, stdev=90.65 00:09:09.569 clat percentiles (usec): 00:09:09.569 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:09.569 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:09.569 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:09.569 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:09.569 | 99.99th=[41157] 00:09:09.569 write: IOPS=506, BW=2026KiB/s (2074kB/s)(2048KiB/1011msec); 0 zone resets 00:09:09.569 slat (usec): min=6, max=1207, avg=10.55, stdev=53.15 00:09:09.569 clat (usec): min=163, max=276, avg=194.44, stdev=13.11 00:09:09.569 lat (usec): min=170, max=1463, avg=204.99, stdev=57.48 00:09:09.569 clat percentiles (usec): 00:09:09.569 | 1.00th=[ 172], 5.00th=[ 176], 10.00th=[ 182], 20.00th=[ 184], 00:09:09.569 | 30.00th=[ 188], 40.00th=[ 192], 50.00th=[ 192], 60.00th=[ 196], 00:09:09.569 | 70.00th=[ 200], 80.00th=[ 204], 90.00th=[ 210], 95.00th=[ 217], 00:09:09.569 | 99.00th=[ 231], 99.50th=[ 255], 99.90th=[ 277], 99.95th=[ 277], 00:09:09.569 | 99.99th=[ 277] 00:09:09.569 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:09:09.569 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:09.569 lat (usec) : 250=95.32%, 500=0.56% 00:09:09.569 lat (msec) : 50=4.12% 00:09:09.569 cpu : usr=0.00%, sys=0.69%, ctx=536, majf=0, minf=1 00:09:09.569 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.569 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.569 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.569 job2: (groupid=0, jobs=1): err= 0: pid=2807119: Fri Jul 26 12:10:02 2024 00:09:09.569 read: IOPS=771, 
BW=3087KiB/s (3161kB/s)(3192KiB/1034msec) 00:09:09.569 slat (nsec): min=5181, max=67228, avg=18736.79, stdev=9849.74 00:09:09.569 clat (usec): min=258, max=41373, avg=960.01, stdev=4945.21 00:09:09.569 lat (usec): min=269, max=41391, avg=978.75, stdev=4946.95 00:09:09.569 clat percentiles (usec): 00:09:09.569 | 1.00th=[ 269], 5.00th=[ 281], 10.00th=[ 285], 20.00th=[ 302], 00:09:09.569 | 30.00th=[ 314], 40.00th=[ 330], 50.00th=[ 343], 60.00th=[ 351], 00:09:09.569 | 70.00th=[ 363], 80.00th=[ 388], 90.00th=[ 445], 95.00th=[ 502], 00:09:09.569 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:09.569 | 99.99th=[41157] 00:09:09.569 write: IOPS=990, BW=3961KiB/s (4056kB/s)(4096KiB/1034msec); 0 zone resets 00:09:09.569 slat (nsec): min=5642, max=55279, avg=13884.70, stdev=9030.17 00:09:09.569 clat (usec): min=177, max=745, avg=221.44, stdev=45.14 00:09:09.569 lat (usec): min=186, max=754, avg=235.32, stdev=46.36 00:09:09.569 clat percentiles (usec): 00:09:09.569 | 1.00th=[ 182], 5.00th=[ 186], 10.00th=[ 188], 20.00th=[ 194], 00:09:09.569 | 30.00th=[ 196], 40.00th=[ 202], 50.00th=[ 206], 60.00th=[ 217], 00:09:09.569 | 70.00th=[ 231], 80.00th=[ 247], 90.00th=[ 265], 95.00th=[ 293], 00:09:09.569 | 99.00th=[ 375], 99.50th=[ 424], 99.90th=[ 676], 99.95th=[ 750], 00:09:09.569 | 99.99th=[ 750] 00:09:09.569 bw ( KiB/s): min= 8192, max= 8192, per=82.72%, avg=8192.00, stdev= 0.00, samples=1 00:09:09.569 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:09.569 lat (usec) : 250=46.05%, 500=51.59%, 750=1.70% 00:09:09.569 lat (msec) : 50=0.66% 00:09:09.569 cpu : usr=1.65%, sys=2.90%, ctx=1825, majf=0, minf=1 00:09:09.569 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.569 issued rwts: total=798,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:09:09.569 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.569 job3: (groupid=0, jobs=1): err= 0: pid=2807120: Fri Jul 26 12:10:02 2024 00:09:09.569 read: IOPS=19, BW=79.8KiB/s (81.7kB/s)(80.0KiB/1003msec) 00:09:09.569 slat (nsec): min=14487, max=45127, avg=27309.85, stdev=9995.97 00:09:09.569 clat (usec): min=40699, max=41051, avg=40956.11, stdev=74.53 00:09:09.569 lat (usec): min=40719, max=41087, avg=40983.42, stdev=75.25 00:09:09.569 clat percentiles (usec): 00:09:09.569 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:09.569 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:09.569 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:09.569 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:09.569 | 99.99th=[41157] 00:09:09.569 write: IOPS=510, BW=2042KiB/s (2091kB/s)(2048KiB/1003msec); 0 zone resets 00:09:09.569 slat (usec): min=8, max=40737, avg=135.70, stdev=2053.66 00:09:09.569 clat (usec): min=182, max=275, avg=212.41, stdev=16.24 00:09:09.569 lat (usec): min=191, max=40982, avg=348.11, stdev=2056.36 00:09:09.569 clat percentiles (usec): 00:09:09.569 | 1.00th=[ 190], 5.00th=[ 192], 10.00th=[ 196], 20.00th=[ 198], 00:09:09.569 | 30.00th=[ 202], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 215], 00:09:09.569 | 70.00th=[ 219], 80.00th=[ 227], 90.00th=[ 233], 95.00th=[ 243], 00:09:09.569 | 99.00th=[ 269], 99.50th=[ 273], 99.90th=[ 277], 99.95th=[ 277], 00:09:09.569 | 99.99th=[ 277] 00:09:09.569 bw ( KiB/s): min= 4096, max= 4096, per=41.36%, avg=4096.00, stdev= 0.00, samples=1 00:09:09.569 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:09.569 lat (usec) : 250=93.61%, 500=2.63% 00:09:09.569 lat (msec) : 50=3.76% 00:09:09.569 cpu : usr=0.50%, sys=0.80%, ctx=535, majf=0, minf=1 00:09:09.569 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:09.569 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.569 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:09.569 issued rwts: total=20,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:09.569 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:09.569 00:09:09.569 Run status group 0 (all jobs): 00:09:09.569 READ: bw=4429KiB/s (4536kB/s), 79.8KiB/s-3087KiB/s (81.7kB/s-3161kB/s), io=4580KiB (4690kB), run=1003-1034msec 00:09:09.569 WRITE: bw=9903KiB/s (10.1MB/s), 2026KiB/s-3961KiB/s (2074kB/s-4056kB/s), io=10.0MiB (10.5MB), run=1003-1034msec 00:09:09.569 00:09:09.569 Disk stats (read/write): 00:09:09.569 nvme0n1: ios=351/512, merge=0/0, ticks=699/162, in_queue=861, util=87.07% 00:09:09.569 nvme0n2: ios=69/512, merge=0/0, ticks=874/101, in_queue=975, util=89.32% 00:09:09.569 nvme0n3: ios=850/1024, merge=0/0, ticks=650/216, in_queue=866, util=95.09% 00:09:09.569 nvme0n4: ios=49/512, merge=0/0, ticks=1565/110, in_queue=1675, util=96.21% 00:09:09.569 12:10:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:09.569 [global] 00:09:09.569 thread=1 00:09:09.569 invalidate=1 00:09:09.569 rw=randwrite 00:09:09.569 time_based=1 00:09:09.569 runtime=1 00:09:09.569 ioengine=libaio 00:09:09.569 direct=1 00:09:09.569 bs=4096 00:09:09.569 iodepth=1 00:09:09.569 norandommap=0 00:09:09.569 numjobs=1 00:09:09.569 00:09:09.569 verify_dump=1 00:09:09.569 verify_backlog=512 00:09:09.569 verify_state_save=0 00:09:09.569 do_verify=1 00:09:09.569 verify=crc32c-intel 00:09:09.569 [job0] 00:09:09.569 filename=/dev/nvme0n1 00:09:09.569 [job1] 00:09:09.569 filename=/dev/nvme0n2 00:09:09.569 [job2] 00:09:09.569 filename=/dev/nvme0n3 00:09:09.569 [job3] 00:09:09.569 filename=/dev/nvme0n4 00:09:09.569 Could not set queue depth (nvme0n1) 00:09:09.569 Could not set queue depth (nvme0n2) 00:09:09.569 Could not set queue depth (nvme0n3) 00:09:09.569 
Could not set queue depth (nvme0n4) 00:09:09.827 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.827 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.827 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.827 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:09.827 fio-3.35 00:09:09.827 Starting 4 threads 00:09:11.206 00:09:11.206 job0: (groupid=0, jobs=1): err= 0: pid=2807354: Fri Jul 26 12:10:04 2024 00:09:11.206 read: IOPS=27, BW=109KiB/s (111kB/s)(112KiB/1030msec) 00:09:11.206 slat (nsec): min=12468, max=37812, avg=25269.61, stdev=9104.74 00:09:11.207 clat (usec): min=325, max=41030, avg=32216.35, stdev=16940.14 00:09:11.207 lat (usec): min=344, max=41048, avg=32241.61, stdev=16940.93 00:09:11.207 clat percentiles (usec): 00:09:11.207 | 1.00th=[ 326], 5.00th=[ 343], 10.00th=[ 359], 20.00th=[ 392], 00:09:11.207 | 30.00th=[40633], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:11.207 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:11.207 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:11.207 | 99.99th=[41157] 00:09:11.207 write: IOPS=497, BW=1988KiB/s (2036kB/s)(2048KiB/1030msec); 0 zone resets 00:09:11.207 slat (nsec): min=9434, max=57279, avg=17103.52, stdev=6773.66 00:09:11.207 clat (usec): min=189, max=844, avg=225.83, stdev=48.12 00:09:11.207 lat (usec): min=201, max=855, avg=242.93, stdev=49.10 00:09:11.207 clat percentiles (usec): 00:09:11.207 | 1.00th=[ 192], 5.00th=[ 198], 10.00th=[ 202], 20.00th=[ 206], 00:09:11.207 | 30.00th=[ 212], 40.00th=[ 219], 50.00th=[ 223], 60.00th=[ 229], 00:09:11.207 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 243], 95.00th=[ 251], 00:09:11.207 | 99.00th=[ 265], 99.50th=[ 791], 99.90th=[ 848], 
99.95th=[ 848], 00:09:11.207 | 99.99th=[ 848] 00:09:11.207 bw ( KiB/s): min= 4096, max= 4096, per=34.33%, avg=4096.00, stdev= 0.00, samples=1 00:09:11.207 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:11.207 lat (usec) : 250=89.63%, 500=5.74%, 1000=0.56% 00:09:11.207 lat (msec) : 50=4.07% 00:09:11.207 cpu : usr=1.07%, sys=0.68%, ctx=541, majf=0, minf=1 00:09:11.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.207 issued rwts: total=28,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.207 job1: (groupid=0, jobs=1): err= 0: pid=2807355: Fri Jul 26 12:10:04 2024 00:09:11.207 read: IOPS=829, BW=3317KiB/s (3396kB/s)(3320KiB/1001msec) 00:09:11.207 slat (nsec): min=6429, max=36285, avg=16897.55, stdev=8847.08 00:09:11.207 clat (usec): min=275, max=42029, avg=858.52, stdev=4503.17 00:09:11.207 lat (usec): min=291, max=42063, avg=875.42, stdev=4504.83 00:09:11.207 clat percentiles (usec): 00:09:11.207 | 1.00th=[ 285], 5.00th=[ 293], 10.00th=[ 297], 20.00th=[ 306], 00:09:11.207 | 30.00th=[ 318], 40.00th=[ 322], 50.00th=[ 334], 60.00th=[ 367], 00:09:11.207 | 70.00th=[ 388], 80.00th=[ 420], 90.00th=[ 445], 95.00th=[ 461], 00:09:11.207 | 99.00th=[41157], 99.50th=[41681], 99.90th=[42206], 99.95th=[42206], 00:09:11.207 | 99.99th=[42206] 00:09:11.207 write: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec); 0 zone resets 00:09:11.207 slat (nsec): min=8145, max=57219, avg=17503.92, stdev=6457.30 00:09:11.207 clat (usec): min=173, max=842, avg=240.13, stdev=43.51 00:09:11.207 lat (usec): min=183, max=853, avg=257.63, stdev=45.34 00:09:11.207 clat percentiles (usec): 00:09:11.207 | 1.00th=[ 182], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 208], 00:09:11.207 | 30.00th=[ 215], 40.00th=[ 
223], 50.00th=[ 233], 60.00th=[ 243], 00:09:11.207 | 70.00th=[ 253], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 306], 00:09:11.207 | 99.00th=[ 396], 99.50th=[ 412], 99.90th=[ 457], 99.95th=[ 840], 00:09:11.207 | 99.99th=[ 840] 00:09:11.207 bw ( KiB/s): min= 6376, max= 6376, per=53.44%, avg=6376.00, stdev= 0.00, samples=1 00:09:11.207 iops : min= 1594, max= 1594, avg=1594.00, stdev= 0.00, samples=1 00:09:11.207 lat (usec) : 250=37.65%, 500=61.27%, 750=0.38%, 1000=0.11% 00:09:11.207 lat (msec) : 4=0.05%, 50=0.54% 00:09:11.207 cpu : usr=2.10%, sys=4.20%, ctx=1855, majf=0, minf=1 00:09:11.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.207 issued rwts: total=830,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.207 job2: (groupid=0, jobs=1): err= 0: pid=2807356: Fri Jul 26 12:10:04 2024 00:09:11.207 read: IOPS=20, BW=83.2KiB/s (85.2kB/s)(84.0KiB/1009msec) 00:09:11.207 slat (nsec): min=16323, max=39079, avg=32983.24, stdev=7768.54 00:09:11.207 clat (usec): min=40549, max=41044, avg=40934.56, stdev=101.96 00:09:11.207 lat (usec): min=40568, max=41081, avg=40967.54, stdev=104.46 00:09:11.207 clat percentiles (usec): 00:09:11.207 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157], 00:09:11.207 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:11.207 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:11.207 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:11.207 | 99.99th=[41157] 00:09:11.207 write: IOPS=507, BW=2030KiB/s (2078kB/s)(2048KiB/1009msec); 0 zone resets 00:09:11.207 slat (nsec): min=10478, max=60649, avg=19876.21, stdev=7869.37 00:09:11.207 clat (usec): min=200, max=477, avg=264.49, stdev=40.75 
00:09:11.207 lat (usec): min=214, max=519, avg=284.37, stdev=42.40 00:09:11.207 clat percentiles (usec): 00:09:11.207 | 1.00th=[ 215], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:09:11.207 | 30.00th=[ 243], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 262], 00:09:11.207 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 302], 95.00th=[ 347], 00:09:11.207 | 99.00th=[ 433], 99.50th=[ 453], 99.90th=[ 478], 99.95th=[ 478], 00:09:11.207 | 99.99th=[ 478] 00:09:11.207 bw ( KiB/s): min= 4096, max= 4096, per=34.33%, avg=4096.00, stdev= 0.00, samples=1 00:09:11.207 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:11.207 lat (usec) : 250=40.90%, 500=55.16% 00:09:11.207 lat (msec) : 50=3.94% 00:09:11.207 cpu : usr=0.99%, sys=1.09%, ctx=534, majf=0, minf=2 00:09:11.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.207 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.207 job3: (groupid=0, jobs=1): err= 0: pid=2807357: Fri Jul 26 12:10:04 2024 00:09:11.207 read: IOPS=789, BW=3158KiB/s (3234kB/s)(3212KiB/1017msec) 00:09:11.207 slat (nsec): min=10428, max=70367, avg=26076.21, stdev=10346.13 00:09:11.207 clat (usec): min=255, max=41314, avg=903.09, stdev=4507.74 00:09:11.207 lat (usec): min=267, max=41327, avg=929.16, stdev=4507.77 00:09:11.207 clat percentiles (usec): 00:09:11.207 | 1.00th=[ 265], 5.00th=[ 273], 10.00th=[ 281], 20.00th=[ 318], 00:09:11.207 | 30.00th=[ 330], 40.00th=[ 359], 50.00th=[ 388], 60.00th=[ 404], 00:09:11.207 | 70.00th=[ 482], 80.00th=[ 498], 90.00th=[ 510], 95.00th=[ 545], 00:09:11.207 | 99.00th=[40633], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:11.207 | 99.99th=[41157] 00:09:11.207 write: IOPS=1006, BW=4028KiB/s 
(4124kB/s)(4096KiB/1017msec); 0 zone resets 00:09:11.207 slat (nsec): min=8224, max=59011, avg=18166.09, stdev=7632.06 00:09:11.207 clat (usec): min=174, max=487, avg=234.48, stdev=56.13 00:09:11.207 lat (usec): min=190, max=527, avg=252.65, stdev=58.28 00:09:11.207 clat percentiles (usec): 00:09:11.207 | 1.00th=[ 178], 5.00th=[ 182], 10.00th=[ 184], 20.00th=[ 188], 00:09:11.207 | 30.00th=[ 192], 40.00th=[ 196], 50.00th=[ 227], 60.00th=[ 243], 00:09:11.207 | 70.00th=[ 258], 80.00th=[ 273], 90.00th=[ 297], 95.00th=[ 343], 00:09:11.207 | 99.00th=[ 441], 99.50th=[ 465], 99.90th=[ 486], 99.95th=[ 490], 00:09:11.207 | 99.99th=[ 490] 00:09:11.207 bw ( KiB/s): min= 4096, max= 4096, per=34.33%, avg=4096.00, stdev= 0.00, samples=2 00:09:11.207 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=2 00:09:11.207 lat (usec) : 250=36.40%, 500=55.83%, 750=7.17%, 1000=0.05% 00:09:11.207 lat (msec) : 50=0.55% 00:09:11.207 cpu : usr=2.17%, sys=4.04%, ctx=1829, majf=0, minf=1 00:09:11.207 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:11.207 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.207 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:11.207 issued rwts: total=803,1024,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:11.207 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:11.207 00:09:11.207 Run status group 0 (all jobs): 00:09:11.207 READ: bw=6532KiB/s (6689kB/s), 83.2KiB/s-3317KiB/s (85.2kB/s-3396kB/s), io=6728KiB (6889kB), run=1001-1030msec 00:09:11.207 WRITE: bw=11.7MiB/s (12.2MB/s), 1988KiB/s-4092KiB/s (2036kB/s-4190kB/s), io=12.0MiB (12.6MB), run=1001-1030msec 00:09:11.207 00:09:11.207 Disk stats (read/write): 00:09:11.207 nvme0n1: ios=46/512, merge=0/0, ticks=1651/108, in_queue=1759, util=98.20% 00:09:11.207 nvme0n2: ios=533/1024, merge=0/0, ticks=534/230, in_queue=764, util=86.59% 00:09:11.207 nvme0n3: ios=51/512, merge=0/0, ticks=1688/133, in_queue=1821, 
util=97.91% 00:09:11.207 nvme0n4: ios=713/1024, merge=0/0, ticks=1493/225, in_queue=1718, util=97.99% 00:09:11.207 12:10:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:11.207 [global] 00:09:11.207 thread=1 00:09:11.207 invalidate=1 00:09:11.207 rw=write 00:09:11.207 time_based=1 00:09:11.207 runtime=1 00:09:11.207 ioengine=libaio 00:09:11.207 direct=1 00:09:11.207 bs=4096 00:09:11.207 iodepth=128 00:09:11.207 norandommap=0 00:09:11.207 numjobs=1 00:09:11.207 00:09:11.207 verify_dump=1 00:09:11.207 verify_backlog=512 00:09:11.207 verify_state_save=0 00:09:11.207 do_verify=1 00:09:11.207 verify=crc32c-intel 00:09:11.207 [job0] 00:09:11.207 filename=/dev/nvme0n1 00:09:11.207 [job1] 00:09:11.207 filename=/dev/nvme0n2 00:09:11.207 [job2] 00:09:11.207 filename=/dev/nvme0n3 00:09:11.207 [job3] 00:09:11.208 filename=/dev/nvme0n4 00:09:11.208 Could not set queue depth (nvme0n1) 00:09:11.208 Could not set queue depth (nvme0n2) 00:09:11.208 Could not set queue depth (nvme0n3) 00:09:11.208 Could not set queue depth (nvme0n4) 00:09:11.208 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.208 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.208 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.208 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:11.208 fio-3.35 00:09:11.208 Starting 4 threads 00:09:12.583 00:09:12.583 job0: (groupid=0, jobs=1): err= 0: pid=2807703: Fri Jul 26 12:10:05 2024 00:09:12.583 read: IOPS=2544, BW=9.94MiB/s (10.4MB/s)(10.0MiB/1006msec) 00:09:12.583 slat (usec): min=3, max=26677, avg=164.32, stdev=1200.23 00:09:12.583 clat (usec): min=5866, max=75040, avg=20803.99, 
stdev=12699.10 00:09:12.583 lat (usec): min=5879, max=75095, avg=20968.31, stdev=12798.04 00:09:12.583 clat percentiles (usec): 00:09:12.583 | 1.00th=[ 7898], 5.00th=[ 9503], 10.00th=[10421], 20.00th=[11207], 00:09:12.583 | 30.00th=[12125], 40.00th=[13566], 50.00th=[15139], 60.00th=[18220], 00:09:12.583 | 70.00th=[26346], 80.00th=[30278], 90.00th=[39584], 95.00th=[49021], 00:09:12.583 | 99.00th=[58459], 99.50th=[65799], 99.90th=[65799], 99.95th=[67634], 00:09:12.583 | 99.99th=[74974] 00:09:12.583 write: IOPS=2821, BW=11.0MiB/s (11.6MB/s)(11.1MiB/1006msec); 0 zone resets 00:09:12.583 slat (usec): min=4, max=27113, avg=194.79, stdev=1304.82 00:09:12.583 clat (usec): min=4924, max=78886, avg=25779.18, stdev=18728.50 00:09:12.583 lat (usec): min=5633, max=78910, avg=25973.97, stdev=18880.99 00:09:12.583 clat percentiles (usec): 00:09:12.583 | 1.00th=[ 6849], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10159], 00:09:12.583 | 30.00th=[11207], 40.00th=[13566], 50.00th=[16319], 60.00th=[23462], 00:09:12.583 | 70.00th=[31589], 80.00th=[44827], 90.00th=[54264], 95.00th=[62653], 00:09:12.583 | 99.00th=[74974], 99.50th=[76022], 99.90th=[79168], 99.95th=[79168], 00:09:12.583 | 99.99th=[79168] 00:09:12.583 bw ( KiB/s): min= 6584, max=15104, per=18.95%, avg=10844.00, stdev=6024.55, samples=2 00:09:12.583 iops : min= 1646, max= 3776, avg=2711.00, stdev=1506.14, samples=2 00:09:12.583 lat (msec) : 10=11.15%, 20=50.09%, 50=29.71%, 100=9.04% 00:09:12.583 cpu : usr=3.28%, sys=5.47%, ctx=226, majf=0, minf=17 00:09:12.583 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8% 00:09:12.583 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.583 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.583 issued rwts: total=2560,2838,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.583 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.583 job1: (groupid=0, jobs=1): err= 0: pid=2807704: Fri Jul 26 12:10:05 
2024 00:09:12.583 read: IOPS=3559, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1007msec) 00:09:12.583 slat (usec): min=2, max=21862, avg=126.20, stdev=791.77 00:09:12.584 clat (usec): min=2650, max=47852, avg=16609.75, stdev=8245.91 00:09:12.584 lat (usec): min=2659, max=47861, avg=16735.95, stdev=8278.75 00:09:12.584 clat percentiles (usec): 00:09:12.584 | 1.00th=[ 4228], 5.00th=[ 7635], 10.00th=[ 8455], 20.00th=[10290], 00:09:12.584 | 30.00th=[10683], 40.00th=[11469], 50.00th=[14091], 60.00th=[18220], 00:09:12.584 | 70.00th=[20317], 80.00th=[23725], 90.00th=[27395], 95.00th=[33817], 00:09:12.584 | 99.00th=[38536], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:09:12.584 | 99.99th=[47973] 00:09:12.584 write: IOPS=3859, BW=15.1MiB/s (15.8MB/s)(15.2MiB/1007msec); 0 zone resets 00:09:12.584 slat (usec): min=3, max=18698, avg=132.17, stdev=826.98 00:09:12.584 clat (usec): min=744, max=51050, avg=17332.06, stdev=9743.10 00:09:12.584 lat (usec): min=757, max=51062, avg=17464.23, stdev=9779.04 00:09:12.584 clat percentiles (usec): 00:09:12.584 | 1.00th=[ 4621], 5.00th=[ 7111], 10.00th=[ 8094], 20.00th=[ 8848], 00:09:12.584 | 30.00th=[10028], 40.00th=[11731], 50.00th=[16057], 60.00th=[17171], 00:09:12.584 | 70.00th=[20055], 80.00th=[25560], 90.00th=[31851], 95.00th=[36439], 00:09:12.584 | 99.00th=[47973], 99.50th=[50594], 99.90th=[51119], 99.95th=[51119], 00:09:12.584 | 99.99th=[51119] 00:09:12.584 bw ( KiB/s): min=13688, max=16384, per=26.28%, avg=15036.00, stdev=1906.36, samples=2 00:09:12.584 iops : min= 3422, max= 4096, avg=3759.00, stdev=476.59, samples=2 00:09:12.584 lat (usec) : 750=0.04%, 1000=0.01% 00:09:12.584 lat (msec) : 2=0.07%, 4=0.54%, 10=22.92%, 20=46.00%, 50=30.05% 00:09:12.584 lat (msec) : 100=0.37% 00:09:12.584 cpu : usr=2.58%, sys=7.16%, ctx=326, majf=0, minf=13 00:09:12.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:12.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.584 complete : 
0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.584 issued rwts: total=3584,3887,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.584 job2: (groupid=0, jobs=1): err= 0: pid=2807705: Fri Jul 26 12:10:05 2024 00:09:12.584 read: IOPS=3081, BW=12.0MiB/s (12.6MB/s)(12.1MiB/1005msec) 00:09:12.584 slat (usec): min=2, max=17622, avg=159.10, stdev=1090.80 00:09:12.584 clat (usec): min=3105, max=50478, avg=20943.45, stdev=9525.21 00:09:12.584 lat (usec): min=6004, max=50516, avg=21102.55, stdev=9585.47 00:09:12.584 clat percentiles (usec): 00:09:12.584 | 1.00th=[ 8029], 5.00th=[ 9765], 10.00th=[11076], 20.00th=[12649], 00:09:12.584 | 30.00th=[13698], 40.00th=[15401], 50.00th=[16712], 60.00th=[21627], 00:09:12.584 | 70.00th=[26084], 80.00th=[30278], 90.00th=[36439], 95.00th=[40109], 00:09:12.584 | 99.00th=[41681], 99.50th=[41681], 99.90th=[43779], 99.95th=[45351], 00:09:12.584 | 99.99th=[50594] 00:09:12.584 write: IOPS=3566, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1005msec); 0 zone resets 00:09:12.584 slat (usec): min=3, max=24996, avg=129.52, stdev=971.64 00:09:12.584 clat (usec): min=790, max=43353, avg=17204.06, stdev=7569.69 00:09:12.584 lat (usec): min=803, max=43383, avg=17333.59, stdev=7620.26 00:09:12.584 clat percentiles (usec): 00:09:12.584 | 1.00th=[ 3064], 5.00th=[ 8356], 10.00th=[ 9634], 20.00th=[11469], 00:09:12.584 | 30.00th=[11863], 40.00th=[12911], 50.00th=[15401], 60.00th=[17433], 00:09:12.584 | 70.00th=[20317], 80.00th=[22938], 90.00th=[27132], 95.00th=[33162], 00:09:12.584 | 99.00th=[38536], 99.50th=[39060], 99.90th=[40109], 99.95th=[43254], 00:09:12.584 | 99.99th=[43254] 00:09:12.584 bw ( KiB/s): min=12263, max=15560, per=24.31%, avg=13911.50, stdev=2331.33, samples=2 00:09:12.584 iops : min= 3065, max= 3890, avg=3477.50, stdev=583.36, samples=2 00:09:12.584 lat (usec) : 1000=0.03% 00:09:12.584 lat (msec) : 2=0.09%, 4=0.82%, 10=7.42%, 20=54.65%, 50=36.97% 00:09:12.584 lat 
(msec) : 100=0.01% 00:09:12.584 cpu : usr=2.39%, sys=5.08%, ctx=292, majf=0, minf=9 00:09:12.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:12.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.584 issued rwts: total=3097,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.584 job3: (groupid=0, jobs=1): err= 0: pid=2807706: Fri Jul 26 12:10:05 2024 00:09:12.584 read: IOPS=3834, BW=15.0MiB/s (15.7MB/s)(15.0MiB/1003msec) 00:09:12.584 slat (usec): min=2, max=19349, avg=130.27, stdev=782.85 00:09:12.584 clat (usec): min=594, max=54895, avg=15907.12, stdev=6420.30 00:09:12.584 lat (usec): min=3502, max=54899, avg=16037.40, stdev=6453.78 00:09:12.584 clat percentiles (usec): 00:09:12.584 | 1.00th=[ 6259], 5.00th=[ 9372], 10.00th=[10421], 20.00th=[11731], 00:09:12.584 | 30.00th=[12387], 40.00th=[12911], 50.00th=[13829], 60.00th=[15139], 00:09:12.584 | 70.00th=[17957], 80.00th=[19792], 90.00th=[23200], 95.00th=[27132], 00:09:12.584 | 99.00th=[37487], 99.50th=[44303], 99.90th=[54789], 99.95th=[54789], 00:09:12.584 | 99.99th=[54789] 00:09:12.584 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:09:12.584 slat (usec): min=3, max=18935, avg=114.42, stdev=659.99 00:09:12.584 clat (usec): min=6443, max=81324, avg=16000.23, stdev=10017.70 00:09:12.584 lat (usec): min=8001, max=81329, avg=16114.66, stdev=10057.49 00:09:12.584 clat percentiles (usec): 00:09:12.584 | 1.00th=[ 8356], 5.00th=[ 9372], 10.00th=[10028], 20.00th=[10683], 00:09:12.584 | 30.00th=[11469], 40.00th=[12518], 50.00th=[14091], 60.00th=[15795], 00:09:12.584 | 70.00th=[16712], 80.00th=[17957], 90.00th=[20317], 95.00th=[24249], 00:09:12.584 | 99.00th=[73925], 99.50th=[77071], 99.90th=[81265], 99.95th=[81265], 00:09:12.584 | 99.99th=[81265] 00:09:12.584 bw ( KiB/s): 
min=14552, max=18179, per=28.60%, avg=16365.50, stdev=2564.68, samples=2 00:09:12.584 iops : min= 3638, max= 4544, avg=4091.00, stdev=640.64, samples=2 00:09:12.584 lat (usec) : 750=0.01% 00:09:12.584 lat (msec) : 4=0.40%, 10=8.50%, 20=75.45%, 50=14.04%, 100=1.60% 00:09:12.584 cpu : usr=3.89%, sys=5.69%, ctx=472, majf=0, minf=11 00:09:12.584 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:12.584 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:12.584 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:12.584 issued rwts: total=3846,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:12.584 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:12.584 00:09:12.584 Run status group 0 (all jobs): 00:09:12.584 READ: bw=50.8MiB/s (53.2MB/s), 9.94MiB/s-15.0MiB/s (10.4MB/s-15.7MB/s), io=51.1MiB (53.6MB), run=1003-1007msec 00:09:12.584 WRITE: bw=55.9MiB/s (58.6MB/s), 11.0MiB/s-16.0MiB/s (11.6MB/s-16.7MB/s), io=56.3MiB (59.0MB), run=1003-1007msec 00:09:12.584 00:09:12.584 Disk stats (read/write): 00:09:12.584 nvme0n1: ios=1657/2048, merge=0/0, ticks=21849/30642, in_queue=52491, util=100.00% 00:09:12.584 nvme0n2: ios=3166/3584, merge=0/0, ticks=16836/17388, in_queue=34224, util=97.26% 00:09:12.584 nvme0n3: ios=2602/2714, merge=0/0, ticks=25883/17745, in_queue=43628, util=89.34% 00:09:12.584 nvme0n4: ios=3104/3584, merge=0/0, ticks=15063/13346, in_queue=28409, util=88.74% 00:09:12.584 12:10:05 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:12.584 [global] 00:09:12.584 thread=1 00:09:12.584 invalidate=1 00:09:12.584 rw=randwrite 00:09:12.584 time_based=1 00:09:12.584 runtime=1 00:09:12.584 ioengine=libaio 00:09:12.584 direct=1 00:09:12.584 bs=4096 00:09:12.584 iodepth=128 00:09:12.584 norandommap=0 00:09:12.584 numjobs=1 00:09:12.584 00:09:12.584 
verify_dump=1 00:09:12.584 verify_backlog=512 00:09:12.584 verify_state_save=0 00:09:12.584 do_verify=1 00:09:12.584 verify=crc32c-intel 00:09:12.584 [job0] 00:09:12.584 filename=/dev/nvme0n1 00:09:12.584 [job1] 00:09:12.584 filename=/dev/nvme0n2 00:09:12.584 [job2] 00:09:12.584 filename=/dev/nvme0n3 00:09:12.584 [job3] 00:09:12.584 filename=/dev/nvme0n4 00:09:12.584 Could not set queue depth (nvme0n1) 00:09:12.584 Could not set queue depth (nvme0n2) 00:09:12.584 Could not set queue depth (nvme0n3) 00:09:12.584 Could not set queue depth (nvme0n4) 00:09:12.843 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.843 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.843 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.843 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:12.843 fio-3.35 00:09:12.843 Starting 4 threads 00:09:14.227 00:09:14.227 job0: (groupid=0, jobs=1): err= 0: pid=2807938: Fri Jul 26 12:10:07 2024 00:09:14.227 read: IOPS=2635, BW=10.3MiB/s (10.8MB/s)(10.4MiB/1014msec) 00:09:14.227 slat (usec): min=3, max=20263, avg=183.28, stdev=1308.23 00:09:14.227 clat (usec): min=984, max=67228, avg=22539.12, stdev=11771.54 00:09:14.227 lat (usec): min=1008, max=67236, avg=22722.39, stdev=11855.59 00:09:14.227 clat percentiles (usec): 00:09:14.227 | 1.00th=[ 1270], 5.00th=[11994], 10.00th=[12256], 20.00th=[12649], 00:09:14.227 | 30.00th=[13435], 40.00th=[20317], 50.00th=[20841], 60.00th=[21890], 00:09:14.227 | 70.00th=[24249], 80.00th=[27395], 90.00th=[36963], 95.00th=[49021], 00:09:14.227 | 99.00th=[63177], 99.50th=[64750], 99.90th=[67634], 99.95th=[67634], 00:09:14.227 | 99.99th=[67634] 00:09:14.227 write: IOPS=3029, BW=11.8MiB/s (12.4MB/s)(12.0MiB/1014msec); 0 zone resets 00:09:14.227 slat 
(usec): min=5, max=28843, avg=154.00, stdev=1104.04 00:09:14.227 clat (usec): min=3568, max=67231, avg=22206.91, stdev=9292.97 00:09:14.227 lat (usec): min=3578, max=67240, avg=22360.91, stdev=9347.75 00:09:14.227 clat percentiles (usec): 00:09:14.227 | 1.00th=[ 5342], 5.00th=[10814], 10.00th=[12911], 20.00th=[15664], 00:09:14.227 | 30.00th=[18482], 40.00th=[19792], 50.00th=[22152], 60.00th=[22414], 00:09:14.227 | 70.00th=[23200], 80.00th=[23725], 90.00th=[32375], 95.00th=[45876], 00:09:14.227 | 99.00th=[55313], 99.50th=[57410], 99.90th=[58983], 99.95th=[67634], 00:09:14.227 | 99.99th=[67634] 00:09:14.227 bw ( KiB/s): min=11896, max=12288, per=22.01%, avg=12092.00, stdev=277.19, samples=2 00:09:14.227 iops : min= 2974, max= 3072, avg=3023.00, stdev=69.30, samples=2 00:09:14.227 lat (usec) : 1000=0.07% 00:09:14.227 lat (msec) : 2=0.50%, 4=0.21%, 10=2.52%, 20=37.34%, 50=55.68% 00:09:14.227 lat (msec) : 100=3.67% 00:09:14.227 cpu : usr=4.34%, sys=5.53%, ctx=275, majf=0, minf=19 00:09:14.227 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.9% 00:09:14.227 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.227 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.227 issued rwts: total=2672,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.228 job1: (groupid=0, jobs=1): err= 0: pid=2807939: Fri Jul 26 12:10:07 2024 00:09:14.228 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:09:14.228 slat (usec): min=2, max=41156, avg=119.92, stdev=1123.12 00:09:14.228 clat (msec): min=3, max=116, avg=15.78, stdev=12.28 00:09:14.228 lat (msec): min=3, max=116, avg=15.90, stdev=12.33 00:09:14.228 clat percentiles (msec): 00:09:14.228 | 1.00th=[ 7], 5.00th=[ 11], 10.00th=[ 11], 20.00th=[ 12], 00:09:14.228 | 30.00th=[ 12], 40.00th=[ 12], 50.00th=[ 13], 60.00th=[ 14], 00:09:14.228 | 70.00th=[ 14], 80.00th=[ 15], 90.00th=[ 21], 
95.00th=[ 43], 00:09:14.228 | 99.00th=[ 65], 99.50th=[ 102], 99.90th=[ 117], 99.95th=[ 117], 00:09:14.228 | 99.99th=[ 117] 00:09:14.228 write: IOPS=4143, BW=16.2MiB/s (17.0MB/s)(16.4MiB/1013msec); 0 zone resets 00:09:14.228 slat (usec): min=3, max=8325, avg=113.29, stdev=653.00 00:09:14.228 clat (usec): min=845, max=40813, avg=15113.78, stdev=6631.99 00:09:14.228 lat (usec): min=851, max=40826, avg=15227.07, stdev=6677.29 00:09:14.228 clat percentiles (usec): 00:09:14.228 | 1.00th=[ 5669], 5.00th=[ 6783], 10.00th=[ 7308], 20.00th=[10552], 00:09:14.228 | 30.00th=[11863], 40.00th=[12125], 50.00th=[12649], 60.00th=[13698], 00:09:14.228 | 70.00th=[17171], 80.00th=[21365], 90.00th=[24773], 95.00th=[27132], 00:09:14.228 | 99.00th=[35390], 99.50th=[36439], 99.90th=[40633], 99.95th=[40633], 00:09:14.228 | 99.99th=[40633] 00:09:14.228 bw ( KiB/s): min=15872, max=16952, per=29.88%, avg=16412.00, stdev=763.68, samples=2 00:09:14.228 iops : min= 3968, max= 4238, avg=4103.00, stdev=190.92, samples=2 00:09:14.228 lat (usec) : 1000=0.04% 00:09:14.228 lat (msec) : 2=0.08%, 4=0.19%, 10=9.83%, 20=73.77%, 50=14.04% 00:09:14.228 lat (msec) : 100=1.80%, 250=0.25% 00:09:14.228 cpu : usr=3.56%, sys=5.63%, ctx=301, majf=0, minf=7 00:09:14.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:14.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.228 issued rwts: total=4096,4197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.228 job2: (groupid=0, jobs=1): err= 0: pid=2807940: Fri Jul 26 12:10:07 2024 00:09:14.228 read: IOPS=2896, BW=11.3MiB/s (11.9MB/s)(11.4MiB/1009msec) 00:09:14.228 slat (usec): min=3, max=29274, avg=163.76, stdev=1256.41 00:09:14.228 clat (usec): min=4155, max=51759, avg=20788.87, stdev=6986.66 00:09:14.228 lat (usec): min=9198, max=51779, avg=20952.63, 
stdev=7068.43 00:09:14.228 clat percentiles (usec): 00:09:14.228 | 1.00th=[ 9372], 5.00th=[11207], 10.00th=[11207], 20.00th=[13829], 00:09:14.228 | 30.00th=[17695], 40.00th=[20055], 50.00th=[20841], 60.00th=[21627], 00:09:14.228 | 70.00th=[22152], 80.00th=[24773], 90.00th=[30278], 95.00th=[33162], 00:09:14.228 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[50594], 00:09:14.228 | 99.99th=[51643] 00:09:14.228 write: IOPS=3044, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1009msec); 0 zone resets 00:09:14.228 slat (usec): min=5, max=20210, avg=159.39, stdev=1030.28 00:09:14.228 clat (usec): min=2946, max=47296, avg=21823.27, stdev=6726.28 00:09:14.228 lat (usec): min=2954, max=47315, avg=21982.66, stdev=6799.03 00:09:14.228 clat percentiles (usec): 00:09:14.228 | 1.00th=[ 9765], 5.00th=[11207], 10.00th=[13435], 20.00th=[15926], 00:09:14.228 | 30.00th=[18744], 40.00th=[21890], 50.00th=[22414], 60.00th=[22676], 00:09:14.228 | 70.00th=[23462], 80.00th=[23987], 90.00th=[30540], 95.00th=[34866], 00:09:14.228 | 99.00th=[42730], 99.50th=[44303], 99.90th=[46924], 99.95th=[46924], 00:09:14.228 | 99.99th=[47449] 00:09:14.228 bw ( KiB/s): min=12288, max=12288, per=22.37%, avg=12288.00, stdev= 0.00, samples=2 00:09:14.228 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=2 00:09:14.228 lat (msec) : 4=0.20%, 10=1.87%, 20=33.48%, 50=64.42%, 100=0.03% 00:09:14.228 cpu : usr=4.56%, sys=6.65%, ctx=289, majf=0, minf=15 00:09:14.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.5%, >=64=98.9% 00:09:14.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.228 issued rwts: total=2923,3072,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.228 job3: (groupid=0, jobs=1): err= 0: pid=2807941: Fri Jul 26 12:10:07 2024 00:09:14.228 read: IOPS=3268, BW=12.8MiB/s 
(13.4MB/s)(12.8MiB/1004msec) 00:09:14.228 slat (usec): min=3, max=11050, avg=140.34, stdev=755.21 00:09:14.228 clat (usec): min=724, max=43617, avg=18085.27, stdev=6861.15 00:09:14.228 lat (usec): min=5435, max=45710, avg=18225.61, stdev=6915.60 00:09:14.228 clat percentiles (usec): 00:09:14.228 | 1.00th=[ 6128], 5.00th=[10945], 10.00th=[11469], 20.00th=[12387], 00:09:14.228 | 30.00th=[13173], 40.00th=[15139], 50.00th=[16450], 60.00th=[18220], 00:09:14.228 | 70.00th=[19268], 80.00th=[22676], 90.00th=[28443], 95.00th=[31589], 00:09:14.228 | 99.00th=[39060], 99.50th=[41681], 99.90th=[43779], 99.95th=[43779], 00:09:14.228 | 99.99th=[43779] 00:09:14.228 write: IOPS=3569, BW=13.9MiB/s (14.6MB/s)(14.0MiB/1004msec); 0 zone resets 00:09:14.228 slat (usec): min=5, max=9443, avg=139.41, stdev=693.34 00:09:14.228 clat (usec): min=8191, max=48924, avg=18698.10, stdev=8807.97 00:09:14.228 lat (usec): min=8208, max=48949, avg=18837.50, stdev=8875.39 00:09:14.228 clat percentiles (usec): 00:09:14.228 | 1.00th=[ 8717], 5.00th=[10814], 10.00th=[10945], 20.00th=[11994], 00:09:14.228 | 30.00th=[12780], 40.00th=[13829], 50.00th=[14353], 60.00th=[17433], 00:09:14.228 | 70.00th=[21103], 80.00th=[25822], 90.00th=[31851], 95.00th=[39060], 00:09:14.228 | 99.00th=[45876], 99.50th=[46924], 99.90th=[49021], 99.95th=[49021], 00:09:14.228 | 99.99th=[49021] 00:09:14.228 bw ( KiB/s): min=12288, max=16416, per=26.13%, avg=14352.00, stdev=2918.94, samples=2 00:09:14.228 iops : min= 3072, max= 4104, avg=3588.00, stdev=729.73, samples=2 00:09:14.228 lat (usec) : 750=0.01% 00:09:14.228 lat (msec) : 10=2.62%, 20=66.92%, 50=30.44% 00:09:14.228 cpu : usr=5.38%, sys=8.37%, ctx=327, majf=0, minf=9 00:09:14.228 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:14.228 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.228 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:14.228 issued rwts: total=3282,3584,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:09:14.228 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:14.228 00:09:14.228 Run status group 0 (all jobs): 00:09:14.228 READ: bw=50.0MiB/s (52.4MB/s), 10.3MiB/s-15.8MiB/s (10.8MB/s-16.6MB/s), io=50.7MiB (53.1MB), run=1004-1014msec 00:09:14.228 WRITE: bw=53.6MiB/s (56.2MB/s), 11.8MiB/s-16.2MiB/s (12.4MB/s-17.0MB/s), io=54.4MiB (57.0MB), run=1004-1014msec 00:09:14.228 00:09:14.228 Disk stats (read/write): 00:09:14.228 nvme0n1: ios=2065/2560, merge=0/0, ticks=46058/51221, in_queue=97279, util=97.70% 00:09:14.228 nvme0n2: ios=3125/3584, merge=0/0, ticks=34327/29883, in_queue=64210, util=86.89% 00:09:14.228 nvme0n3: ios=2447/2560, merge=0/0, ticks=50738/52032, in_queue=102770, util=97.80% 00:09:14.228 nvme0n4: ios=2979/3072, merge=0/0, ticks=17258/16435, in_queue=33693, util=89.67% 00:09:14.228 12:10:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:14.228 12:10:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=2808077 00:09:14.228 12:10:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:14.228 12:10:07 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:14.228 [global] 00:09:14.228 thread=1 00:09:14.228 invalidate=1 00:09:14.228 rw=read 00:09:14.228 time_based=1 00:09:14.228 runtime=10 00:09:14.228 ioengine=libaio 00:09:14.228 direct=1 00:09:14.228 bs=4096 00:09:14.228 iodepth=1 00:09:14.228 norandommap=1 00:09:14.228 numjobs=1 00:09:14.228 00:09:14.228 [job0] 00:09:14.228 filename=/dev/nvme0n1 00:09:14.228 [job1] 00:09:14.228 filename=/dev/nvme0n2 00:09:14.228 [job2] 00:09:14.228 filename=/dev/nvme0n3 00:09:14.228 [job3] 00:09:14.228 filename=/dev/nvme0n4 00:09:14.228 Could not set queue depth (nvme0n1) 00:09:14.228 Could not set queue depth (nvme0n2) 00:09:14.228 Could not set queue depth (nvme0n3) 
00:09:14.228 Could not set queue depth (nvme0n4) 00:09:14.228 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.228 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.228 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.228 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:14.228 fio-3.35 00:09:14.228 Starting 4 threads 00:09:17.513 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:17.513 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:17.513 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=14995456, buflen=4096 00:09:17.513 fio: pid=2808168, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:17.513 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:17.513 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:17.513 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=33062912, buflen=4096 00:09:17.513 fio: pid=2808167, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:17.771 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=6418432, buflen=4096 00:09:17.771 fio: pid=2808165, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:17.771 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:09:17.771 12:10:10 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:18.029 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=11665408, buflen=4096 00:09:18.029 fio: pid=2808166, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:09:18.029 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:18.029 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:18.287 00:09:18.287 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2808165: Fri Jul 26 12:10:11 2024 00:09:18.287 read: IOPS=451, BW=1804KiB/s (1847kB/s)(6268KiB/3475msec) 00:09:18.287 slat (usec): min=5, max=16146, avg=45.45, stdev=736.50 00:09:18.287 clat (usec): min=296, max=45257, avg=2155.44, stdev=8232.24 00:09:18.287 lat (usec): min=303, max=45264, avg=2200.92, stdev=8260.74 00:09:18.287 clat percentiles (usec): 00:09:18.287 | 1.00th=[ 322], 5.00th=[ 363], 10.00th=[ 371], 20.00th=[ 379], 00:09:18.287 | 30.00th=[ 392], 40.00th=[ 408], 50.00th=[ 416], 60.00th=[ 420], 00:09:18.287 | 70.00th=[ 424], 80.00th=[ 433], 90.00th=[ 478], 95.00th=[ 603], 00:09:18.287 | 99.00th=[41157], 99.50th=[41157], 99.90th=[42206], 99.95th=[45351], 00:09:18.287 | 99.99th=[45351] 00:09:18.287 bw ( KiB/s): min= 104, max= 6944, per=7.31%, avg=1258.67, stdev=2785.25, samples=6 00:09:18.287 iops : min= 26, max= 1736, avg=314.67, stdev=696.31, samples=6 00:09:18.287 lat (usec) : 500=92.54%, 750=2.68%, 1000=0.32% 00:09:18.287 lat (msec) : 2=0.06%, 10=0.06%, 50=4.27% 00:09:18.287 cpu : usr=0.09%, sys=0.75%, ctx=1574, majf=0, minf=1 00:09:18.287 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 
8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.287 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.287 complete : 0=0.1%, 4=99.9%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.287 issued rwts: total=1568,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.287 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.287 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2808166: Fri Jul 26 12:10:11 2024 00:09:18.287 read: IOPS=758, BW=3035KiB/s (3107kB/s)(11.1MiB/3754msec) 00:09:18.287 slat (usec): min=5, max=15908, avg=29.41, stdev=506.97 00:09:18.288 clat (usec): min=281, max=41314, avg=1278.40, stdev=6059.34 00:09:18.288 lat (usec): min=287, max=41333, avg=1307.82, stdev=6079.52 00:09:18.288 clat percentiles (usec): 00:09:18.288 | 1.00th=[ 293], 5.00th=[ 302], 10.00th=[ 306], 20.00th=[ 314], 00:09:18.288 | 30.00th=[ 322], 40.00th=[ 326], 50.00th=[ 338], 60.00th=[ 359], 00:09:18.288 | 70.00th=[ 371], 80.00th=[ 392], 90.00th=[ 420], 95.00th=[ 441], 00:09:18.288 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:18.288 | 99.99th=[41157] 00:09:18.288 bw ( KiB/s): min= 96, max=10363, per=15.21%, avg=2617.57, stdev=4381.60, samples=7 00:09:18.288 iops : min= 24, max= 2590, avg=654.29, stdev=1095.18, samples=7 00:09:18.288 lat (usec) : 500=96.53%, 750=0.95%, 1000=0.11% 00:09:18.288 lat (msec) : 2=0.07%, 4=0.04%, 50=2.28% 00:09:18.288 cpu : usr=0.35%, sys=0.99%, ctx=2855, majf=0, minf=1 00:09:18.288 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.288 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.288 issued rwts: total=2849,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.288 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.288 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, 
func=io_u error, error=Remote I/O error): pid=2808167: Fri Jul 26 12:10:11 2024 00:09:18.288 read: IOPS=2514, BW=9.82MiB/s (10.3MB/s)(31.5MiB/3210msec) 00:09:18.288 slat (nsec): min=5227, max=78209, avg=12171.51, stdev=6219.83 00:09:18.288 clat (usec): min=285, max=41390, avg=379.46, stdev=788.17 00:09:18.288 lat (usec): min=293, max=41408, avg=391.63, stdev=788.65 00:09:18.288 clat percentiles (usec): 00:09:18.288 | 1.00th=[ 297], 5.00th=[ 306], 10.00th=[ 310], 20.00th=[ 318], 00:09:18.288 | 30.00th=[ 326], 40.00th=[ 334], 50.00th=[ 347], 60.00th=[ 359], 00:09:18.288 | 70.00th=[ 371], 80.00th=[ 416], 90.00th=[ 449], 95.00th=[ 478], 00:09:18.288 | 99.00th=[ 537], 99.50th=[ 553], 99.90th=[ 898], 99.95th=[ 996], 00:09:18.288 | 99.99th=[41157] 00:09:18.288 bw ( KiB/s): min= 9152, max=10968, per=59.10%, avg=10168.00, stdev=723.87, samples=6 00:09:18.288 iops : min= 2288, max= 2742, avg=2542.00, stdev=180.97, samples=6 00:09:18.288 lat (usec) : 500=97.05%, 750=2.77%, 1000=0.11% 00:09:18.288 lat (msec) : 2=0.01%, 50=0.04% 00:09:18.288 cpu : usr=1.87%, sys=4.77%, ctx=8073, majf=0, minf=1 00:09:18.288 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.288 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.288 issued rwts: total=8073,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.288 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.288 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=2808168: Fri Jul 26 12:10:11 2024 00:09:18.288 read: IOPS=1247, BW=4988KiB/s (5107kB/s)(14.3MiB/2936msec) 00:09:18.288 slat (nsec): min=5312, max=65457, avg=19450.47, stdev=9786.47 00:09:18.288 clat (usec): min=319, max=42305, avg=771.63, stdev=3874.31 00:09:18.288 lat (usec): min=324, max=42339, avg=791.07, stdev=3874.57 00:09:18.288 clat percentiles (usec): 00:09:18.288 | 1.00th=[ 
330], 5.00th=[ 343], 10.00th=[ 351], 20.00th=[ 363], 00:09:18.288 | 30.00th=[ 375], 40.00th=[ 388], 50.00th=[ 400], 60.00th=[ 412], 00:09:18.288 | 70.00th=[ 429], 80.00th=[ 441], 90.00th=[ 453], 95.00th=[ 478], 00:09:18.288 | 99.00th=[ 537], 99.50th=[41157], 99.90th=[42206], 99.95th=[42206], 00:09:18.288 | 99.99th=[42206] 00:09:18.288 bw ( KiB/s): min= 112, max= 9528, per=33.94%, avg=5840.00, stdev=3815.62, samples=5 00:09:18.288 iops : min= 28, max= 2382, avg=1460.00, stdev=953.90, samples=5 00:09:18.288 lat (usec) : 500=97.79%, 750=1.28% 00:09:18.288 lat (msec) : 50=0.90% 00:09:18.288 cpu : usr=1.12%, sys=3.20%, ctx=3662, majf=0, minf=1 00:09:18.288 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:18.288 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.288 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.288 issued rwts: total=3662,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.288 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:18.288 00:09:18.288 Run status group 0 (all jobs): 00:09:18.288 READ: bw=16.8MiB/s (17.6MB/s), 1804KiB/s-9.82MiB/s (1847kB/s-10.3MB/s), io=63.1MiB (66.1MB), run=2936-3754msec 00:09:18.288 00:09:18.288 Disk stats (read/write): 00:09:18.288 nvme0n1: ios=1388/0, merge=0/0, ticks=3301/0, in_queue=3301, util=95.22% 00:09:18.288 nvme0n2: ios=2547/0, merge=0/0, ticks=3512/0, in_queue=3512, util=95.36% 00:09:18.288 nvme0n3: ios=7857/0, merge=0/0, ticks=2925/0, in_queue=2925, util=96.79% 00:09:18.288 nvme0n4: ios=3659/0, merge=0/0, ticks=2668/0, in_queue=2668, util=96.75% 00:09:18.288 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:18.288 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:18.546 12:10:11 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:18.546 12:10:11 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:18.804 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:18.804 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:19.062 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:19.062 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:19.320 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:19.320 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 2808077 00:09:19.320 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:19.320 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:19.578 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:19.578 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:19.578 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:09:19.578 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:19.578 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:09:19.578 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:19.578 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:19.578 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:09:19.578 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:19.578 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:19.578 nvmf hotplug test: fio failed as expected 00:09:19.578 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:19.838 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:19.838 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:19.838 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:19.838 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:19.838 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:19.838 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:19.838 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:09:19.838 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:19.838 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:09:19.838 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i 
in {1..20} 00:09:19.838 12:10:12 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:19.838 rmmod nvme_tcp 00:09:19.838 rmmod nvme_fabrics 00:09:19.838 rmmod nvme_keyring 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 2806029 ']' 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 2806029 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@950 -- # '[' -z 2806029 ']' 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # kill -0 2806029 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # uname 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2806029 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2806029' 00:09:19.838 killing process with pid 2806029 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@969 -- # kill 2806029 00:09:19.838 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
common/autotest_common.sh@974 -- # wait 2806029 00:09:20.099 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:20.099 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:20.100 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:20.100 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:20.100 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:20.100 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:20.100 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:20.100 12:10:13 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:22.646 00:09:22.646 real 0m24.150s 00:09:22.646 user 1m24.046s 00:09:22.646 sys 0m6.801s 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:22.646 ************************************ 00:09:22.646 END TEST nvmf_fio_target 00:09:22.646 ************************************ 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:22.646 
12:10:15 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:22.646 ************************************ 00:09:22.646 START TEST nvmf_bdevio 00:09:22.646 ************************************ 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:22.646 * Looking for test storage... 00:09:22.646 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:22.646 12:10:15 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:09:22.646 12:10:15 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.557 12:10:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.557 12:10:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:24.557 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:24.557 Found 
0000:0a:00.1 (0x8086 - 0x159b) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:24.557 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:24.557 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.557 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i 
cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.558 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.558 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.255 ms 00:09:24.558 00:09:24.558 --- 10.0.0.2 ping statistics --- 00:09:24.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.558 rtt min/avg/max/mdev = 0.255/0.255/0.255/0.000 ms 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.558 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:24.558 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.167 ms 00:09:24.558 00:09:24.558 --- 10.0.0.1 ping statistics --- 00:09:24.558 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.558 rtt min/avg/max/mdev = 0.167/0.167/0.167/0.000 ms 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=2810796 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 2810796 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@831 -- # '[' -z 2810796 ']' 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.558 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:24.558 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.558 [2024-07-26 12:10:17.633783] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:09:24.558 [2024-07-26 12:10:17.633885] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.558 EAL: No free 2048 kB hugepages reported on node 1 00:09:24.558 [2024-07-26 12:10:17.703453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:24.816 [2024-07-26 12:10:17.815931] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:24.816 [2024-07-26 12:10:17.815981] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:24.816 [2024-07-26 12:10:17.816010] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:24.816 [2024-07-26 12:10:17.816021] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:24.816 [2024-07-26 12:10:17.816031] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:24.816 [2024-07-26 12:10:17.816101] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:24.816 [2024-07-26 12:10:17.816148] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:24.816 [2024-07-26 12:10:17.816200] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:24.816 [2024-07-26 12:10:17.816203] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # return 0 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.816 [2024-07-26 12:10:17.965594] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.816 12:10:17 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.816 Malloc0 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.816 12:10:17 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.816 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.816 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:24.816 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.816 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.816 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.816 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:24.816 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:24.816 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:24.816 [2024-07-26 12:10:18.018751] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:24.816 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:24.816 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:24.816 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:24.817 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:09:24.817 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:09:24.817 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:24.817 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:24.817 { 00:09:24.817 "params": { 00:09:24.817 "name": "Nvme$subsystem", 00:09:24.817 "trtype": "$TEST_TRANSPORT", 00:09:24.817 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:24.817 "adrfam": "ipv4", 00:09:24.817 "trsvcid": "$NVMF_PORT", 00:09:24.817 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:24.817 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:24.817 "hdgst": ${hdgst:-false}, 00:09:24.817 "ddgst": ${ddgst:-false} 00:09:24.817 }, 00:09:24.817 "method": "bdev_nvme_attach_controller" 00:09:24.817 } 00:09:24.817 EOF 00:09:24.817 )") 00:09:24.817 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:09:24.817 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
00:09:24.817 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:09:24.817 12:10:18 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:24.817 "params": { 00:09:24.817 "name": "Nvme1", 00:09:24.817 "trtype": "tcp", 00:09:24.817 "traddr": "10.0.0.2", 00:09:24.817 "adrfam": "ipv4", 00:09:24.817 "trsvcid": "4420", 00:09:24.817 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:24.817 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:24.817 "hdgst": false, 00:09:24.817 "ddgst": false 00:09:24.817 }, 00:09:24.817 "method": "bdev_nvme_attach_controller" 00:09:24.817 }' 00:09:24.817 [2024-07-26 12:10:18.065460] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:09:24.817 [2024-07-26 12:10:18.065537] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2810947 ] 00:09:25.076 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.076 [2024-07-26 12:10:18.127686] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:25.076 [2024-07-26 12:10:18.243479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.076 [2024-07-26 12:10:18.243531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:25.076 [2024-07-26 12:10:18.243534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:25.643 I/O targets: 00:09:25.643 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:25.643 00:09:25.643 00:09:25.643 CUnit - A unit testing framework for C - Version 2.1-3 00:09:25.643 http://cunit.sourceforge.net/ 00:09:25.643 00:09:25.643 00:09:25.643 Suite: bdevio tests on: Nvme1n1 00:09:25.643 Test: blockdev write read block ...passed 00:09:25.643 Test: blockdev write zeroes read block ...passed 00:09:25.643 Test: blockdev write zeroes read no split 
...passed 00:09:25.643 Test: blockdev write zeroes read split ...passed 00:09:25.643 Test: blockdev write zeroes read split partial ...passed 00:09:25.643 Test: blockdev reset ...[2024-07-26 12:10:18.795956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:09:25.643 [2024-07-26 12:10:18.796078] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x5aa580 (9): Bad file descriptor 00:09:25.643 [2024-07-26 12:10:18.891451] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:09:25.643 passed 00:09:25.902 Test: blockdev write read 8 blocks ...passed 00:09:25.902 Test: blockdev write read size > 128k ...passed 00:09:25.902 Test: blockdev write read invalid size ...passed 00:09:25.902 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:25.902 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:25.902 Test: blockdev write read max offset ...passed 00:09:25.902 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:25.902 Test: blockdev writev readv 8 blocks ...passed 00:09:25.902 Test: blockdev writev readv 30 x 1block ...passed 00:09:26.162 Test: blockdev writev readv block ...passed 00:09:26.162 Test: blockdev writev readv size > 128k ...passed 00:09:26.162 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:26.162 Test: blockdev comparev and writev ...[2024-07-26 12:10:19.232245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:26.162 [2024-07-26 12:10:19.232282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:26.162 [2024-07-26 12:10:19.232307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 
00:09:26.162 [2024-07-26 12:10:19.232325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:26.162 [2024-07-26 12:10:19.232678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:26.162 [2024-07-26 12:10:19.232701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:26.162 [2024-07-26 12:10:19.232723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:26.162 [2024-07-26 12:10:19.232739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:26.162 [2024-07-26 12:10:19.233096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:26.162 [2024-07-26 12:10:19.233121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:26.162 [2024-07-26 12:10:19.233143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:26.162 [2024-07-26 12:10:19.233159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:26.162 [2024-07-26 12:10:19.233507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:26.162 [2024-07-26 12:10:19.233530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:26.162 [2024-07-26 12:10:19.233561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 
nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:26.162 [2024-07-26 12:10:19.233578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:26.162 passed 00:09:26.162 Test: blockdev nvme passthru rw ...passed 00:09:26.162 Test: blockdev nvme passthru vendor specific ...[2024-07-26 12:10:19.316353] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:26.162 [2024-07-26 12:10:19.316418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:26.162 [2024-07-26 12:10:19.316617] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:26.162 [2024-07-26 12:10:19.316642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:26.162 [2024-07-26 12:10:19.316826] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:26.162 [2024-07-26 12:10:19.316849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:26.162 [2024-07-26 12:10:19.317027] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:26.162 [2024-07-26 12:10:19.317049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:26.162 passed 00:09:26.162 Test: blockdev nvme admin passthru ...passed 00:09:26.162 Test: blockdev copy ...passed 00:09:26.162 00:09:26.162 Run Summary: Type Total Ran Passed Failed Inactive 00:09:26.162 suites 1 1 n/a 0 0 00:09:26.162 tests 23 23 23 0 0 00:09:26.162 asserts 152 152 152 0 n/a 00:09:26.162 00:09:26.162 Elapsed time = 
1.557 seconds 00:09:26.421 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:26.421 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:26.421 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:26.421 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:26.421 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:26.421 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:26.421 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:26.421 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:09:26.421 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:26.421 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:09:26.421 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:26.421 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:26.421 rmmod nvme_tcp 00:09:26.421 rmmod nvme_fabrics 00:09:26.421 rmmod nvme_keyring 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 2810796 ']' 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 2810796 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@950 -- # '[' -z 2810796 ']' 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # kill -0 2810796 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # uname 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2810796 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2810796' 00:09:26.680 killing process with pid 2810796 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@969 -- # kill 2810796 00:09:26.680 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@974 -- # wait 2810796 00:09:26.940 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:26.940 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:26.940 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:26.940 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:26.940 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:26.940 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:26.940 12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:26.940 
12:10:19 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:28.851 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:28.851 00:09:28.851 real 0m6.596s 00:09:28.851 user 0m12.229s 00:09:28.851 sys 0m2.051s 00:09:28.851 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.851 12:10:22 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:28.851 ************************************ 00:09:28.851 END TEST nvmf_bdevio 00:09:28.851 ************************************ 00:09:28.851 12:10:22 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:09:28.851 00:09:28.851 real 3m56.843s 00:09:28.851 user 10m18.506s 00:09:28.851 sys 1m6.278s 00:09:28.851 12:10:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:28.851 12:10:22 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:28.851 ************************************ 00:09:28.851 END TEST nvmf_target_core 00:09:28.851 ************************************ 00:09:28.851 12:10:22 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:28.851 12:10:22 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:28.851 12:10:22 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:28.851 12:10:22 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:09:28.851 ************************************ 00:09:28.851 START TEST nvmf_target_extra 00:09:28.851 ************************************ 00:09:28.851 12:10:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:09:29.110 * Looking for test storage... 
00:09:29.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:29.110 12:10:22 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:29.110 ************************************ 00:09:29.110 START TEST nvmf_example 00:09:29.110 ************************************ 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:09:29.110 * Looking for test storage... 
00:09:29.110 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:29.110 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:09:29.111 12:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:29.111 12:10:22 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:09:29.111 12:10:22 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:31.016 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:31.017 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:31.017 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:31.017 12:10:24 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:31.017 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:31.017 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:31.017 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:31.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:31.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms 00:09:31.276 00:09:31.276 --- 10.0.0.2 ping statistics --- 00:09:31.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.276 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:31.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:31.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.120 ms 00:09:31.276 00:09:31.276 --- 10.0.0.1 ping statistics --- 00:09:31.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:31.276 rtt min/avg/max/mdev = 0.120/0.120/0.120/0.000 ms 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # 
NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=2813072 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 2813072 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@831 -- # '[' -z 2813072 ']' 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:31.276 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:31.276 12:10:24 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:31.276 EAL: No free 2048 kB hugepages reported on node 1 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # return 0 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:09:32.211 12:10:25 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp 
adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:09:32.468 EAL: No free 2048 kB hugepages reported on node 1 00:09:42.455 Initializing NVMe Controllers 00:09:42.455 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:09:42.455 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:09:42.455 Initialization complete. Launching workers. 00:09:42.455 ======================================================== 00:09:42.455 Latency(us) 00:09:42.455 Device Information : IOPS MiB/s Average min max 00:09:42.455 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 15089.20 58.94 4242.66 885.56 19407.31 00:09:42.455 ======================================================== 00:09:42.455 Total : 15089.20 58.94 4242.66 885.56 19407.31 00:09:42.455 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:42.456 rmmod nvme_tcp 00:09:42.456 rmmod nvme_fabrics 00:09:42.456 rmmod nvme_keyring 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@124 -- # set -e 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 2813072 ']' 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 2813072 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@950 -- # '[' -z 2813072 ']' 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # kill -0 2813072 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # uname 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:42.456 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2813072 00:09:42.714 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@956 -- # process_name=nvmf 00:09:42.714 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # '[' nvmf = sudo ']' 00:09:42.714 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2813072' 00:09:42.714 killing process with pid 2813072 00:09:42.714 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@969 -- # kill 2813072 00:09:42.714 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@974 -- # wait 2813072 00:09:42.714 nvmf threads initialize successfully 00:09:42.714 bdev subsystem init successfully 00:09:42.714 created a nvmf target service 00:09:42.714 create targets's poll groups done 00:09:42.714 all subsystems of target started 00:09:42.714 nvmf target is running 00:09:42.714 all subsystems of target stopped 00:09:42.714 destroy targets's poll groups done 00:09:42.714 destroyed the nvmf target 
service 00:09:42.714 bdev subsystem finish successfully 00:09:42.714 nvmf threads destroy successfully 00:09:42.974 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:42.974 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:42.974 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:42.974 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:42.974 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:42.974 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:42.974 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:42.974 12:10:35 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.880 00:09:44.880 real 0m15.844s 00:09:44.880 user 0m44.931s 00:09:44.880 sys 0m3.263s 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:09:44.880 ************************************ 00:09:44.880 END TEST nvmf_example 00:09:44.880 ************************************ 00:09:44.880 12:10:38 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:09:44.880 ************************************ 00:09:44.880 START TEST nvmf_filesystem 00:09:44.880 ************************************ 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:09:44.880 * Looking for test storage... 00:09:44.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:09:44.880 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@31 -- # CONFIG_OCF=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:09:45.142 12:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 
00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:09:45.142 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 
00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:09:45.143 #define SPDK_CONFIG_H 00:09:45.143 #define SPDK_CONFIG_APPS 1 00:09:45.143 #define SPDK_CONFIG_ARCH native 00:09:45.143 #undef SPDK_CONFIG_ASAN 00:09:45.143 #undef SPDK_CONFIG_AVAHI 00:09:45.143 #undef SPDK_CONFIG_CET 00:09:45.143 #define SPDK_CONFIG_COVERAGE 1 00:09:45.143 #define SPDK_CONFIG_CROSS_PREFIX 00:09:45.143 #undef SPDK_CONFIG_CRYPTO 00:09:45.143 #undef SPDK_CONFIG_CRYPTO_MLX5 00:09:45.143 #undef SPDK_CONFIG_CUSTOMOCF 00:09:45.143 #undef SPDK_CONFIG_DAOS 00:09:45.143 #define SPDK_CONFIG_DAOS_DIR 00:09:45.143 #define SPDK_CONFIG_DEBUG 1 00:09:45.143 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:09:45.143 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:09:45.143 #define SPDK_CONFIG_DPDK_INC_DIR 00:09:45.143 #define SPDK_CONFIG_DPDK_LIB_DIR 00:09:45.143 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:09:45.143 #undef SPDK_CONFIG_DPDK_UADK 00:09:45.143 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:09:45.143 #define SPDK_CONFIG_EXAMPLES 1 00:09:45.143 #undef SPDK_CONFIG_FC 00:09:45.143 #define SPDK_CONFIG_FC_PATH 00:09:45.143 #define SPDK_CONFIG_FIO_PLUGIN 1 00:09:45.143 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:09:45.143 
#undef SPDK_CONFIG_FUSE 00:09:45.143 #undef SPDK_CONFIG_FUZZER 00:09:45.143 #define SPDK_CONFIG_FUZZER_LIB 00:09:45.143 #undef SPDK_CONFIG_GOLANG 00:09:45.143 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:09:45.143 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:09:45.143 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:09:45.143 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:09:45.143 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:09:45.143 #undef SPDK_CONFIG_HAVE_LIBBSD 00:09:45.143 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:09:45.143 #define SPDK_CONFIG_IDXD 1 00:09:45.143 #define SPDK_CONFIG_IDXD_KERNEL 1 00:09:45.143 #undef SPDK_CONFIG_IPSEC_MB 00:09:45.143 #define SPDK_CONFIG_IPSEC_MB_DIR 00:09:45.143 #define SPDK_CONFIG_ISAL 1 00:09:45.143 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:09:45.143 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:09:45.143 #define SPDK_CONFIG_LIBDIR 00:09:45.143 #undef SPDK_CONFIG_LTO 00:09:45.143 #define SPDK_CONFIG_MAX_LCORES 128 00:09:45.143 #define SPDK_CONFIG_NVME_CUSE 1 00:09:45.143 #undef SPDK_CONFIG_OCF 00:09:45.143 #define SPDK_CONFIG_OCF_PATH 00:09:45.143 #define SPDK_CONFIG_OPENSSL_PATH 00:09:45.143 #undef SPDK_CONFIG_PGO_CAPTURE 00:09:45.143 #define SPDK_CONFIG_PGO_DIR 00:09:45.143 #undef SPDK_CONFIG_PGO_USE 00:09:45.143 #define SPDK_CONFIG_PREFIX /usr/local 00:09:45.143 #undef SPDK_CONFIG_RAID5F 00:09:45.143 #undef SPDK_CONFIG_RBD 00:09:45.143 #define SPDK_CONFIG_RDMA 1 00:09:45.143 #define SPDK_CONFIG_RDMA_PROV verbs 00:09:45.143 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:09:45.143 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:09:45.143 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:09:45.143 #define SPDK_CONFIG_SHARED 1 00:09:45.143 #undef SPDK_CONFIG_SMA 00:09:45.143 #define SPDK_CONFIG_TESTS 1 00:09:45.143 #undef SPDK_CONFIG_TSAN 00:09:45.143 #define SPDK_CONFIG_UBLK 1 00:09:45.143 #define SPDK_CONFIG_UBSAN 1 00:09:45.143 #undef SPDK_CONFIG_UNIT_TESTS 00:09:45.143 #undef SPDK_CONFIG_URING 00:09:45.143 #define SPDK_CONFIG_URING_PATH 00:09:45.143 #undef 
SPDK_CONFIG_URING_ZNS 00:09:45.143 #undef SPDK_CONFIG_USDT 00:09:45.143 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:09:45.143 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:09:45.143 #define SPDK_CONFIG_VFIO_USER 1 00:09:45.143 #define SPDK_CONFIG_VFIO_USER_DIR 00:09:45.143 #define SPDK_CONFIG_VHOST 1 00:09:45.143 #define SPDK_CONFIG_VIRTIO 1 00:09:45.143 #undef SPDK_CONFIG_VTUNE 00:09:45.143 #define SPDK_CONFIG_VTUNE_DIR 00:09:45.143 #define SPDK_CONFIG_WERROR 1 00:09:45.143 #define SPDK_CONFIG_WPDK_DIR 00:09:45.143 #undef SPDK_CONFIG_XNVME 00:09:45.143 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.143 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.143 12:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # 
TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:09:45.144 12:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:09:45.144 
12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:09:45.144 12:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:09:45.144 
12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:09:45.144 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:09:45.145 12:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:09:45.145 
12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export 
SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@179 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@180 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export PYTHONDONTWRITEBYTECODE=1 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONDONTWRITEBYTECODE=1 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@196 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@201 -- # rm -rf /var/tmp/asan_suppression_file 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@202 -- # cat 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # echo leak:libfuse3.so 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # '[' -z /var/spdk/dependencies ']' 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@247 -- # export DEPENDENCY_DIR 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:45.145 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@252 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@258 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@261 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@264 -- # '[' 0 -eq 0 ']' 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@265 -- # export valgrind= 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # valgrind= 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # uname -s 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # '[' Linux = Linux ']' 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # HUGEMEM=4096 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # export CLEAR_HUGE=yes 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # CLEAR_HUGE=yes 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@274 -- # [[ 0 -eq 1 ]] 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@281 -- # MAKE=make 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@282 -- # MAKEFLAGS=-j48 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # export HUGEMEM=4096 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # HUGEMEM=4096 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # NO_HUGE=() 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # TEST_MODE= 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@302 -- # for i in "$@" 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@303 -- # case "$i" in 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@308 -- # TEST_TRANSPORT=tcp 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # [[ -z 2814773 ]] 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@320 -- # kill -0 2814773 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # [[ -v testdir ]] 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@332 -- # local requested_size=2147483648 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local mount target_dir 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@335 -- # local -A mounts fss sizes avails uses 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local source fs size avail mount use 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # local storage_fallback storage_candidates 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # mktemp -udt spdk.XXXXXX 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # storage_fallback=/tmp/spdk.LUS0jv 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # [[ -n '' ]] 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@352 -- # [[ -n '' ]] 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@357 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.LUS0jv/tests/target /tmp/spdk.LUS0jv 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # requested_size=2214592512 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # df -T 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@329 -- # grep -v Filesystem 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_devtmpfs 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=devtmpfs 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=67108864 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=67108864 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=0 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=/dev/pmem0 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=ext2 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=953643008 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=5284429824 00:09:45.146 12:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=4330786816 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=spdk_root 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=overlay 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=55592456192 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=61994713088 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=6402256896 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30935175168 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=62181376 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:45.146 12:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=12376535040 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=12398944256 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=22409216 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=30996451328 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=30997356544 00:09:45.146 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@365 -- # uses["$mount"]=905216 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # mounts["$mount"]=tmpfs 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # fss["$mount"]=tmpfs 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # avails["$mount"]=6199463936 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@364 -- # sizes["$mount"]=6199468032 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@365 -- # uses["$mount"]=4096 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # read -r source fs size use avail _ mount 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # printf '* Looking for test storage...\n' 00:09:45.147 * Looking for test storage... 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@370 -- # local target_space new_size 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # for target_dir in "${storage_candidates[@]}" 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # awk '$1 !~ /Filesystem/{print $6}' 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mount=/ 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # target_space=55592456192 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@377 -- # (( target_space == 0 || target_space < requested_size )) 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # (( target_space >= requested_size )) 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == tmpfs ]] 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ overlay == ramfs ]] 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # [[ / == / ]] 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@383 -- # 
new_size=8616849408 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@384 -- # (( new_size * 100 / sizes[/] > 95 )) 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@390 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.147 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # return 0 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e 
/proc/self/fd/15 ]] 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.147 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:45.148 12:10:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:45.148 12:10:38 
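The `build_nvmf_app_args` trace above shows the target's command line being assembled incrementally: flags are appended to the `NVMF_APP` bash array only when their controlling variables are set (`-i "$NVMF_APP_SHM_ID" -e 0xFFFF` unconditionally, huge-page options conditionally). A minimal sketch of that pattern, with simplified illustrative names rather than the actual SPDK helpers:

```shell
#!/usr/bin/env bash
# Sketch of conditional command-line assembly as in build_nvmf_app_args:
# options accumulate in a bash array, gated on their controlling variables.
# Function and variable names here are illustrative, not the SPDK originals.

build_app_args() {
    local shm_id=$1 no_huge=$2
    local -a app=()
    app+=(-i "$shm_id" -e 0xFFFF)          # SHM id and trace mask, as in the log
    [[ -n $no_huge ]] && app+=(--no-huge -s 1024)  # only when hugepages are disabled
    echo "${app[@]}"
}
```

Using an array (rather than a flat string) keeps arguments with embedded spaces intact when the command is finally executed as `"${app[@]}"`.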
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:09:45.148 12:10:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:09:47.122 12:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:47.122 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:09:47.123 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:09:47.123 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:09:47.123 Found net devices under 0000:0a:00.0: cvl_0_0 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:09:47.123 Found net devices under 0000:0a:00.1: cvl_0_1 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:47.123 12:10:40 
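The PCI discovery trace above (`gather_supported_nvmf_pci_devs`) sorts each discovered NIC into an `e810`, `x722`, or `mlx` bucket by its vendor:device ID before looking up its net devices under `/sys/bus/pci/devices/$pci/net/`. The ID lists below are taken from the log itself (e.g. `0x8086:0x159b` matched as e810, yielding the two `Found 0000:0a:00.x` lines); the helper name is illustrative:

```shell
#!/usr/bin/env bash
# Sketch of the NIC classification step traced above: a "vendor:device" pair
# is mapped to the device family the test framework knows about. IDs mirror
# the pci_bus_cache lookups in the log; classify_nic itself is illustrative.

classify_nic() {
    case $1 in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;    # Intel E810 variants
        0x8086:0x37d2)               echo x722 ;;    # Intel X722
        0x15b3:*)                    echo mlx ;;     # Mellanox families (0x1017, 0x1019, ...)
        *)                           echo unknown ;;
    esac
}
```

In the run above both ports of the E810 card (`0000:0a:00.0` and `0000:0a:00.1`, device `0x159b`) land in the e810 bucket, which then becomes `pci_devs`.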
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:47.123 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:47.123 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.249 ms 00:09:47.123 00:09:47.123 --- 10.0.0.2 ping statistics --- 00:09:47.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.123 rtt min/avg/max/mdev = 0.249/0.249/0.249/0.000 ms 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:47.123 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:47.123 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:09:47.123 00:09:47.123 --- 10.0.0.1 ping statistics --- 00:09:47.123 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:47.123 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:47.123 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:09:47.383 12:10:40 
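The `nvmf_tcp_init` sequence traced above builds the test topology: one NIC port (`cvl_0_0`) is moved into a network namespace to act as the target at 10.0.0.2, the other (`cvl_0_1`) stays in the host namespace as the initiator at 10.0.0.1, port 4420 is opened, and both directions are verified with `ping`. A dry-run sketch of that sequence (the `run` wrapper only prints, so this needs no root; on a real host it would execute the commands directly):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the netns-based NVMe/TCP test topology set up above.
# Interface names and addresses follow the log; the helper is illustrative.

run() { printf '%s\n' "$*"; }   # swap for direct execution on a real host

tcp_init() {
    local tgt_if=$1 ini_if=$2 ns="${1}_ns_spdk"
    run ip netns add "$ns"                                   # target namespace
    run ip link set "$tgt_if" netns "$ns"                    # move target port in
    run ip addr add 10.0.0.1/24 dev "$ini_if"                # initiator side
    run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$tgt_if"
    run ip link set "$ini_if" up
    run ip netns exec "$ns" ip link set "$tgt_if" up
    run iptables -I INPUT 1 -i "$ini_if" -p tcp --dport 4420 -j ACCEPT
}
```

Running the target inside a namespace lets a single machine exercise real kernel TCP between "two hosts", which is why `NVMF_APP` is later prefixed with `ip netns exec cvl_0_0_ns_spdk`.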
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:47.383 ************************************ 00:09:47.383 START TEST nvmf_filesystem_no_in_capsule 00:09:47.383 ************************************ 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 0 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2816395 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2816395 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@831 -- # '[' -z 2816395 ']' 00:09:47.383 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:47.384 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:47.384 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:47.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:47.384 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:47.384 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.384 [2024-07-26 12:10:40.455957] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:09:47.384 [2024-07-26 12:10:40.456032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:47.384 EAL: No free 2048 kB hugepages reported on node 1 00:09:47.384 [2024-07-26 12:10:40.519781] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:47.384 [2024-07-26 12:10:40.630764] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:47.384 [2024-07-26 12:10:40.630823] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:47.384 [2024-07-26 12:10:40.630851] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:47.384 [2024-07-26 12:10:40.630863] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:47.384 [2024-07-26 12:10:40.630873] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:47.384 [2024-07-26 12:10:40.630967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:47.384 [2024-07-26 12:10:40.631039] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:47.384 [2024-07-26 12:10:40.631087] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:47.384 [2024-07-26 12:10:40.631090] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.642 [2024-07-26 12:10:40.779518] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.642 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.902 Malloc1 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.902 [2024-07-26 12:10:40.948941] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:47.902 12:10:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:47.902 { 00:09:47.902 "name": "Malloc1", 00:09:47.902 "aliases": [ 00:09:47.902 "52a63c79-2dc3-4a8e-82b3-5146865964d5" 00:09:47.902 ], 00:09:47.902 "product_name": "Malloc disk", 00:09:47.902 "block_size": 512, 00:09:47.902 "num_blocks": 1048576, 00:09:47.902 "uuid": "52a63c79-2dc3-4a8e-82b3-5146865964d5", 00:09:47.902 "assigned_rate_limits": { 00:09:47.902 "rw_ios_per_sec": 0, 00:09:47.902 "rw_mbytes_per_sec": 0, 00:09:47.902 "r_mbytes_per_sec": 0, 00:09:47.902 "w_mbytes_per_sec": 0 00:09:47.902 }, 00:09:47.902 "claimed": true, 00:09:47.902 "claim_type": "exclusive_write", 00:09:47.902 "zoned": false, 00:09:47.902 "supported_io_types": { 00:09:47.902 "read": true, 00:09:47.902 "write": true, 00:09:47.902 "unmap": true, 00:09:47.902 "flush": true, 00:09:47.902 "reset": true, 00:09:47.902 "nvme_admin": false, 00:09:47.902 "nvme_io": false, 00:09:47.902 "nvme_io_md": false, 00:09:47.902 "write_zeroes": true, 00:09:47.902 "zcopy": true, 00:09:47.902 "get_zone_info": false, 00:09:47.902 "zone_management": false, 00:09:47.902 "zone_append": false, 00:09:47.902 "compare": false, 00:09:47.902 "compare_and_write": 
false, 00:09:47.902 "abort": true, 00:09:47.902 "seek_hole": false, 00:09:47.902 "seek_data": false, 00:09:47.902 "copy": true, 00:09:47.902 "nvme_iov_md": false 00:09:47.902 }, 00:09:47.902 "memory_domains": [ 00:09:47.902 { 00:09:47.902 "dma_device_id": "system", 00:09:47.902 "dma_device_type": 1 00:09:47.902 }, 00:09:47.902 { 00:09:47.902 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:47.902 "dma_device_type": 2 00:09:47.902 } 00:09:47.902 ], 00:09:47.902 "driver_specific": {} 00:09:47.902 } 00:09:47.902 ]' 00:09:47.902 12:10:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:47.902 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:47.902 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:47.902 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:47.902 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:47.902 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:47.902 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:47.902 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:09:48.472 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:09:48.472 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:09:48.472 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:09:48.472 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:09:48.472 12:10:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:09:51.007 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:09:51.007 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:09:51.007 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:09:51.007 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:09:51.007 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:09:51.007 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:09:51.007 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:09:51.007 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:09:51.007 12:10:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:09:51.007 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:09:51.007 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:09:51.007 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:09:51.008 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:09:51.008 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:09:51.008 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:09:51.008 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:09:51.008 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:09:51.008 12:10:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:09:51.266 12:10:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:09:52.201 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:09:52.201 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:09:52.201 12:10:45 
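After `nvme connect`, the `waitforserial` trace above polls `lsblk -l -o NAME,SERIAL` until a block device carrying the expected serial (`SPDKISFASTANDAWESOME`) appears, since device enumeration is asynchronous. The polling pattern generalizes to any command; a self-contained sketch (the `lsblk` usage is shown only as a comment, since it needs the connected device):

```shell
#!/usr/bin/env bash
# Sketch of the retry-until-success polling used by waitforserial above:
# re-run a probe command until it succeeds or the retry budget is spent.
# wait_for_cmd is an illustrative name, not the SPDK helper.

wait_for_cmd() {                    # wait_for_cmd <retries> <cmd...>
    local retries=$1; shift
    local i=0
    while [ "$i" -lt "$retries" ]; do
        "$@" && return 0            # probe succeeded
        i=$((i + 1))
        sleep 0.1                   # the real test sleeps longer between tries
    done
    return 1                        # gave up
}

# Real-world usage against a connected NVMe-oF device (assumption, not run here):
#   wait_for_cmd 15 sh -c 'lsblk -l -o NAME,SERIAL | grep -q SPDKISFASTANDAWESOME'
```

The same idea explains the `sleep 2` before the first `lsblk`: give the kernel a moment to create `/dev/nvme0n1` before polling.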
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:52.201 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:52.201 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:52.461 ************************************ 00:09:52.461 START TEST filesystem_ext4 00:09:52.461 ************************************ 00:09:52.461 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:09:52.461 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:09:52.461 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:52.461 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:09:52.461 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:09:52.461 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:52.461 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:09:52.461 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # local force 00:09:52.461 12:10:45 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:09:52.461 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:09:52.461 12:10:45 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:09:52.461 mke2fs 1.46.5 (30-Dec-2021) 00:09:52.461 Discarding device blocks: 0/522240 done 00:09:52.461 Creating filesystem with 522240 1k blocks and 130560 inodes 00:09:52.461 Filesystem UUID: e3af4b1e-af86-452b-a3bb-84118ad2356c 00:09:52.461 Superblock backups stored on blocks: 00:09:52.461 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:09:52.461 00:09:52.461 Allocating group tables: 0/64 done 00:09:52.461 Writing inode tables: 0/64 done 00:09:52.721 Creating journal (8192 blocks): done 00:09:52.980 Writing superblocks and filesystem accounting information: 0/64 4/64 done 00:09:52.980 00:09:52.980 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@945 -- # return 0 00:09:52.980 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:53.238 12:10:46 
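make_filesystem picks mkfs's force flag per filesystem, as the branches at autotest_common.sh@931-934 show: ext4 takes `-F`, btrfs and xfs take `-f`. A dry-run sketch of just that dispatch, printing the command instead of formatting anything (the retry counter `i` kept by the real helper is omitted):

```shell
# Pick the right "force" flag per filesystem and show the mkfs invocation.
make_filesystem_cmd() {
    local fstype=$1 dev_name=$2 force
    if [[ $fstype == ext4 ]]; then
        force=-F           # mke2fs uses uppercase -F
    else
        force=-f           # mkfs.btrfs and mkfs.xfs use lowercase -f
    fi
    echo "mkfs.$fstype $force $dev_name"
}

make_filesystem_cmd ext4 /dev/nvme0n1p1   # → mkfs.ext4 -F /dev/nvme0n1p1
```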
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 2816395 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:53.238 00:09:53.238 real 0m0.992s 00:09:53.238 user 0m0.010s 00:09:53.238 sys 0m0.064s 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:09:53.238 ************************************ 00:09:53.238 END TEST filesystem_ext4 00:09:53.238 ************************************ 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:09:53.238 
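Each filesystem_* subtest exercises the freshly made filesystem with the same touch/sync/rm/sync cycle (filesystem.sh@24-27) before unmounting. A sketch of that cycle run against a scratch directory instead of a real mount, so it needs no root or NVMe device (the scratch path is an assumption):

```shell
# Write-then-delete smoke test, as in filesystem.sh@24-27: create a file,
# flush to disk, remove it, flush again.
fs_smoke_test() {
    local mnt=$1
    touch "$mnt/aaa" || return 1
    sync
    rm "$mnt/aaa" || return 1
    sync
}

scratch=$(mktemp -d)          # stand-in for /mnt/device
fs_smoke_test "$scratch" && echo "smoke test passed"
rmdir "$scratch"
```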
12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:53.238 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:53.497 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:53.497 ************************************ 00:09:53.497 START TEST filesystem_btrfs 00:09:53.497 ************************************ 00:09:53.497 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:09:53.497 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:09:53.497 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:53.497 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:09:53.497 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:09:53.497 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:53.497 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:09:53.497 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # local force 00:09:53.497 12:10:46 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:09:53.497 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:09:53.497 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:09:53.757 btrfs-progs v6.6.2 00:09:53.757 See https://btrfs.readthedocs.io for more information. 00:09:53.757 00:09:53.757 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:09:53.757 NOTE: several default settings have changed in version 5.15, please make sure 00:09:53.757 this does not affect your deployments: 00:09:53.757 - DUP for metadata (-m dup) 00:09:53.757 - enabled no-holes (-O no-holes) 00:09:53.757 - enabled free-space-tree (-R free-space-tree) 00:09:53.757 00:09:53.757 Label: (null) 00:09:53.757 UUID: 7d97cf02-4d51-49d3-8a2b-4f600fb42a8e 00:09:53.757 Node size: 16384 00:09:53.757 Sector size: 4096 00:09:53.757 Filesystem size: 510.00MiB 00:09:53.757 Block group profiles: 00:09:53.757 Data: single 8.00MiB 00:09:53.757 Metadata: DUP 32.00MiB 00:09:53.757 System: DUP 8.00MiB 00:09:53.757 SSD detected: yes 00:09:53.757 Zoned device: no 00:09:53.757 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:09:53.757 Runtime features: free-space-tree 00:09:53.757 Checksum: crc32c 00:09:53.757 Number of devices: 1 00:09:53.757 Devices: 00:09:53.757 ID SIZE PATH 00:09:53.757 1 510.00MiB /dev/nvme0n1p1 00:09:53.757 00:09:53.757 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@945 -- # return 0 00:09:53.757 12:10:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 
00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 2816395 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:55.136 00:09:55.136 real 0m1.610s 00:09:55.136 user 0m0.021s 00:09:55.136 sys 0m0.125s 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1126 -- # 
xtrace_disable 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:09:55.136 ************************************ 00:09:55.136 END TEST filesystem_btrfs 00:09:55.136 ************************************ 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:55.136 ************************************ 00:09:55.136 START TEST filesystem_xfs 00:09:55.136 ************************************ 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:09:55.136 12:10:48 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@928 -- # local i=0 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # local force 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@934 -- # force=-f 00:09:55.136 12:10:48 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:09:55.136 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:09:55.136 = sectsz=512 attr=2, projid32bit=1 00:09:55.136 = crc=1 finobt=1, sparse=1, rmapbt=0 00:09:55.136 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:09:55.136 data = bsize=4096 blocks=130560, imaxpct=25 00:09:55.136 = sunit=0 swidth=0 blks 00:09:55.136 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:09:55.136 log =internal log bsize=4096 blocks=16384, version=2 00:09:55.136 = sectsz=512 sunit=0 blks, lazy-count=1 00:09:55.136 realtime =none extsz=4096 blocks=0, rtextents=0 00:09:56.074 Discarding blocks...Done. 
00:09:56.074 12:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@945 -- # return 0 00:09:56.074 12:10:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 2816395 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:09:58.611 12:10:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:09:58.611 00:09:58.611 real 0m3.370s 00:09:58.611 user 0m0.020s 00:09:58.611 sys 0m0.054s 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:09:58.611 ************************************ 00:09:58.611 END TEST filesystem_xfs 00:09:58.611 ************************************ 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:58.611 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
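After `nvme disconnect`, waitforserial_disconnect polls lsblk until the serial disappears (autotest_common.sh@1219-1231). A sketch of that inverse polling pattern, with the lsblk check left as a comment so it runs anywhere; reusing the connect-side 16x2s retry budget here is an assumption:

```shell
# Poll until "$@" *fails*, i.e. the device is gone from lsblk output.
wait_until_gone() {
    local i=0
    while (( i++ <= 15 )); do
        "$@" || return 0
        sleep 2
    done
    return 1   # device still present after roughly 32 seconds
}

# Real predicate from the trace:
#   lsblk -l -o NAME,SERIAL | grep -q -w SPDKISFASTANDAWESOME
```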
common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:09:58.611 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 2816395 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2816395 ']' 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2816395 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # uname 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2816395 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2816395' 00:09:58.612 killing process with pid 2816395 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@969 -- # kill 2816395 00:09:58.612 12:10:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@974 -- # wait 2816395 00:09:59.179 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:09:59.179 00:09:59.179 real 0m11.918s 00:09:59.179 user 0m45.531s 00:09:59.179 sys 0m1.804s 00:09:59.179 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:09:59.179 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.179 ************************************ 00:09:59.179 END TEST nvmf_filesystem_no_in_capsule 00:09:59.179 ************************************ 00:09:59.179 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:09:59.179 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:09:59.179 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1107 -- # xtrace_disable 00:09:59.179 12:10:52 
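killprocess first verifies the pid with `kill -0` and inspects the process name via `ps --no-headers -o comm=` before killing and waiting on it (autotest_common.sh@950-974). A minimal sketch of the kill-and-reap part, demonstrated on a throwaway `sleep`; the process-name and sudo checks from the real helper are elided:

```shell
# Kill a child process we own and reap it so no zombie is left behind.
killprocess_sketch() {
    local pid=$1
    kill -0 "$pid" 2>/dev/null || return 1    # not running (or not ours)
    kill "$pid"
    wait "$pid" 2>/dev/null || true           # reap; exit code 143 is expected
}

sleep 30 &
victim=$!
killprocess_sketch "$victim"
```

Note that `wait` only works on children of the calling shell, which is why the sketch spawns its own victim.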
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:09:59.179 ************************************ 00:09:59.179 START TEST nvmf_filesystem_in_capsule 00:09:59.179 ************************************ 00:09:59.179 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1125 -- # nvmf_filesystem_part 4096 00:09:59.179 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:09:59.179 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:09:59.179 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:59.179 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@724 -- # xtrace_disable 00:09:59.179 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.180 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=2818069 00:09:59.180 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:59.180 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 2818069 00:09:59.180 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@831 -- # '[' -z 2818069 ']' 00:09:59.180 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:59.180 12:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # local max_retries=100 00:09:59.180 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:59.180 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:59.180 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # xtrace_disable 00:09:59.180 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.180 [2024-07-26 12:10:52.422915] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:09:59.180 [2024-07-26 12:10:52.423003] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:59.438 EAL: No free 2048 kB hugepages reported on node 1 00:09:59.438 [2024-07-26 12:10:52.488726] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:59.438 [2024-07-26 12:10:52.597711] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:59.438 [2024-07-26 12:10:52.597770] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:59.438 [2024-07-26 12:10:52.597798] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:59.438 [2024-07-26 12:10:52.597810] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:59.438 [2024-07-26 12:10:52.597819] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
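waitforlisten blocks, up to the max_retries=100 seen in the trace, until the target is up on /var/tmp/spdk.sock. The real helper talks RPC over the socket; this sketch only polls for the path to appear, which is a deliberate simplification:

```shell
# Poll for a path (e.g. the RPC unix socket) to appear, up to N retries.
wait_for_path() {
    local path=$1 retries=${2:-100} i=0
    while (( i++ < retries )); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    return 1
}

# Real usage would be: wait_for_path /var/tmp/spdk.sock
```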
00:09:59.438 [2024-07-26 12:10:52.597904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:59.438 [2024-07-26 12:10:52.597970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:59.438 [2024-07-26 12:10:52.598038] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:59.438 [2024-07-26 12:10:52.598036] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # return 0 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@730 -- # xtrace_disable 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.695 [2024-07-26 12:10:52.741239] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.695 Malloc1 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.695 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.696 12:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.696 [2024-07-26 12:10:52.914918] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:09:59.696 12:10:52 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:09:59.696 { 00:09:59.696 "name": "Malloc1", 00:09:59.696 "aliases": [ 00:09:59.696 "8f42feac-c3ef-43d6-93de-4b6bf8551260" 00:09:59.696 ], 00:09:59.696 "product_name": "Malloc disk", 00:09:59.696 "block_size": 512, 00:09:59.696 "num_blocks": 1048576, 00:09:59.696 "uuid": "8f42feac-c3ef-43d6-93de-4b6bf8551260", 00:09:59.696 "assigned_rate_limits": { 00:09:59.696 "rw_ios_per_sec": 0, 00:09:59.696 "rw_mbytes_per_sec": 0, 00:09:59.696 "r_mbytes_per_sec": 0, 00:09:59.696 "w_mbytes_per_sec": 0 00:09:59.696 }, 00:09:59.696 "claimed": true, 00:09:59.696 "claim_type": "exclusive_write", 00:09:59.696 "zoned": false, 00:09:59.696 "supported_io_types": { 00:09:59.696 "read": true, 00:09:59.696 "write": true, 00:09:59.696 "unmap": true, 00:09:59.696 "flush": true, 00:09:59.696 "reset": true, 00:09:59.696 "nvme_admin": false, 00:09:59.696 "nvme_io": false, 00:09:59.696 "nvme_io_md": false, 00:09:59.696 "write_zeroes": true, 00:09:59.696 "zcopy": true, 00:09:59.696 "get_zone_info": false, 00:09:59.696 "zone_management": false, 00:09:59.696 "zone_append": false, 00:09:59.696 "compare": false, 00:09:59.696 "compare_and_write": false, 00:09:59.696 "abort": true, 00:09:59.696 "seek_hole": false, 00:09:59.696 "seek_data": false, 00:09:59.696 "copy": true, 00:09:59.696 "nvme_iov_md": false 00:09:59.696 }, 00:09:59.696 "memory_domains": [ 00:09:59.696 { 00:09:59.696 "dma_device_id": "system", 00:09:59.696 "dma_device_type": 1 00:09:59.696 }, 00:09:59.696 { 00:09:59.696 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:09:59.696 "dma_device_type": 2 00:09:59.696 } 00:09:59.696 ], 00:09:59.696 
"driver_specific": {} 00:09:59.696 } 00:09:59.696 ]' 00:09:59.696 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:09:59.955 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:09:59.955 12:10:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:09:59.955 12:10:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:09:59.955 12:10:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:09:59.955 12:10:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:09:59.955 12:10:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:09:59.955 12:10:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:00.525 12:10:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:00.525 12:10:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:10:00.525 12:10:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:00.525 12:10:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n 
'' ]] 00:10:00.525 12:10:53 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:03.060 12:10:55 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:03.060 12:10:55 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:03.628 12:10:56 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:04.565 ************************************ 00:10:04.565 START TEST filesystem_in_capsule_ext4 00:10:04.565 ************************************ 00:10:04.565 12:10:57 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local fstype=ext4 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@928 -- # local i=0 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # local force 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # '[' ext4 = ext4 ']' 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # force=-F 00:10:04.565 12:10:57 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@937 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:04.565 mke2fs 1.46.5 (30-Dec-2021) 00:10:04.565 Discarding device blocks: 
0/522240 done 00:10:04.565 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:04.565 Filesystem UUID: 32d24d31-b286-474f-bde8-447928d1fea8 00:10:04.565 Superblock backups stored on blocks: 00:10:04.565 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:04.565 00:10:04.566 Allocating group tables: 0/64 done 00:10:04.566 Writing inode tables: 0/64 done 00:10:04.852 Creating journal (8192 blocks): done 00:10:05.790 Writing superblocks and filesystem accounting information: 0/64 done 00:10:05.790 00:10:05.790 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@945 -- # return 0 00:10:05.790 12:10:58 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:06.048 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:06.048 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:10:06.048 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 2818069 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:06.049 00:10:06.049 real 0m1.482s 00:10:06.049 user 0m0.020s 00:10:06.049 sys 0m0.055s 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:06.049 ************************************ 00:10:06.049 END TEST filesystem_in_capsule_ext4 00:10:06.049 ************************************ 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:06.049 ************************************ 00:10:06.049 START 
TEST filesystem_in_capsule_btrfs 00:10:06.049 ************************************ 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@926 -- # local fstype=btrfs 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@928 -- # local i=0 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # local force 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # '[' btrfs = ext4 ']' 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@934 -- # force=-f 00:10:06.049 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@937 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:06.307 btrfs-progs v6.6.2 00:10:06.307 See https://btrfs.readthedocs.io for more information. 00:10:06.307 00:10:06.307 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:06.307 NOTE: several default settings have changed in version 5.15, please make sure 00:10:06.307 this does not affect your deployments: 00:10:06.307 - DUP for metadata (-m dup) 00:10:06.307 - enabled no-holes (-O no-holes) 00:10:06.307 - enabled free-space-tree (-R free-space-tree) 00:10:06.307 00:10:06.307 Label: (null) 00:10:06.307 UUID: 281a6dbf-6604-41ec-a9ff-ddb961035aca 00:10:06.307 Node size: 16384 00:10:06.307 Sector size: 4096 00:10:06.307 Filesystem size: 510.00MiB 00:10:06.307 Block group profiles: 00:10:06.307 Data: single 8.00MiB 00:10:06.307 Metadata: DUP 32.00MiB 00:10:06.307 System: DUP 8.00MiB 00:10:06.307 SSD detected: yes 00:10:06.307 Zoned device: no 00:10:06.307 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:10:06.307 Runtime features: free-space-tree 00:10:06.307 Checksum: crc32c 00:10:06.307 Number of devices: 1 00:10:06.307 Devices: 00:10:06.307 ID SIZE PATH 00:10:06.307 1 510.00MiB /dev/nvme0n1p1 00:10:06.307 00:10:06.307 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@945 -- # return 0 00:10:06.307 12:10:59 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:10:07.245 12:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 2818069 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:07.245 00:10:07.245 real 0m1.105s 00:10:07.245 user 0m0.020s 00:10:07.245 sys 0m0.118s 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:07.245 ************************************ 00:10:07.245 END TEST 
filesystem_in_capsule_btrfs 00:10:07.245 ************************************ 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:07.245 ************************************ 00:10:07.245 START TEST filesystem_in_capsule_xfs 00:10:07.245 ************************************ 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1125 -- # nvmf_filesystem_create xfs nvme0n1 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local fstype=xfs 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local dev_name=/dev/nvme0n1p1 00:10:07.245 12:11:00 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@928 -- # local i=0 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # local force 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # '[' xfs = ext4 ']' 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@934 -- # force=-f 00:10:07.245 12:11:00 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@937 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:07.245 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:07.245 = sectsz=512 attr=2, projid32bit=1 00:10:07.245 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:07.245 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:07.245 data = bsize=4096 blocks=130560, imaxpct=25 00:10:07.245 = sunit=0 swidth=0 blks 00:10:07.245 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:07.245 log =internal log bsize=4096 blocks=16384, version=2 00:10:07.245 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:07.245 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:08.180 Discarding blocks...Done. 
00:10:08.180 12:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@945 -- # return 0 00:10:08.180 12:11:01 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 2818069 00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:10.082 00:10:10.082 real 0m2.595s 00:10:10.082 user 0m0.017s 00:10:10.082 sys 0m0.061s 00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:10.082 ************************************ 00:10:10.082 END TEST filesystem_in_capsule_xfs 00:10:10.082 ************************************ 00:10:10.082 12:11:02 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:10.082 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.082 12:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 2818069 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@950 -- # '[' -z 2818069 ']' 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # kill -0 2818069 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # uname 00:10:10.082 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:10.082 12:11:03 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2818069 00:10:10.340 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:10.340 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:10.340 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2818069' 00:10:10.340 killing process with pid 2818069 00:10:10.340 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@969 -- # kill 2818069 00:10:10.340 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@974 -- # wait 2818069 00:10:10.598 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:10.598 00:10:10.598 real 0m11.446s 00:10:10.598 user 0m43.654s 00:10:10.598 sys 0m1.788s 00:10:10.598 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:10.598 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:10.598 ************************************ 00:10:10.598 END TEST nvmf_filesystem_in_capsule 00:10:10.598 ************************************ 00:10:10.598 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:10:10.598 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:10.599 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:10:10.599 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:10.599 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:10:10.599 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:10.599 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:10.858 rmmod nvme_tcp 00:10:10.858 rmmod nvme_fabrics 00:10:10.858 rmmod nvme_keyring 00:10:10.858 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:10.858 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:10:10.858 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:10:10.859 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:10:10.859 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:10.859 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:10.859 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:10.859 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:10.859 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:10.859 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:10.859 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:10.859 12:11:03 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:12.770 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:12.770 00:10:12.770 real 
0m27.864s 00:10:12.770 user 1m30.103s 00:10:12.770 sys 0m5.178s 00:10:12.770 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:12.770 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:12.770 ************************************ 00:10:12.770 END TEST nvmf_filesystem 00:10:12.770 ************************************ 00:10:12.770 12:11:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:12.770 12:11:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:12.770 12:11:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:12.770 12:11:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:12.770 ************************************ 00:10:12.770 START TEST nvmf_target_discovery 00:10:12.770 ************************************ 00:10:12.770 12:11:05 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:10:13.030 * Looking for test storage... 
00:10:13.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 
-- # NVMF_PORT_REFERRAL=4430 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:10:13.030 12:11:06 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:10:14.941 
12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 
(0x8086 - 0x159b)' 00:10:14.941 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:14.941 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:14.941 12:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:14.941 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:14.941 12:11:07 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:14.941 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:10:14.941 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:14.942 12:11:07 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:14.942 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:14.942 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:10:14.942 00:10:14.942 --- 10.0.0.2 ping statistics --- 00:10:14.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.942 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:14.942 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:14.942 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.103 ms 00:10:14.942 00:10:14.942 --- 10.0.0.1 ping statistics --- 00:10:14.942 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.942 rtt min/avg/max/mdev = 0.103/0.103/0.103/0.000 ms 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:10:14.942 12:11:08 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=2821528 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 2821528 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@831 -- # '[' -z 2821528 ']' 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.942 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:14.942 12:11:08 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:14.942 [2024-07-26 12:11:08.173524] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:10:14.942 [2024-07-26 12:11:08.173617] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:15.202 EAL: No free 2048 kB hugepages reported on node 1 00:10:15.202 [2024-07-26 12:11:08.245187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:15.202 [2024-07-26 12:11:08.366181] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:15.202 [2024-07-26 12:11:08.366245] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:15.202 [2024-07-26 12:11:08.366272] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:15.202 [2024-07-26 12:11:08.366286] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:15.202 [2024-07-26 12:11:08.366298] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:15.202 [2024-07-26 12:11:08.366368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:15.202 [2024-07-26 12:11:08.366423] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:15.202 [2024-07-26 12:11:08.366478] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:15.202 [2024-07-26 12:11:08.366481] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # return 0 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 [2024-07-26 12:11:09.157614] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:10:16.138 12:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 Null1 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 [2024-07-26 12:11:09.197886] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 Null2 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 
12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 Null3 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.138 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 Null4 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:10:16.139 12:11:09 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.139 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:10:16.398 00:10:16.398 Discovery Log Number of Records 6, Generation counter 6 00:10:16.398 =====Discovery Log Entry 0====== 00:10:16.398 trtype: tcp 00:10:16.398 adrfam: ipv4 00:10:16.398 subtype: current discovery subsystem 00:10:16.398 treq: not required 00:10:16.398 portid: 0 00:10:16.398 trsvcid: 4420 00:10:16.398 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:16.398 traddr: 10.0.0.2 00:10:16.398 eflags: explicit discovery connections, duplicate discovery information 00:10:16.398 sectype: none 00:10:16.398 =====Discovery Log Entry 1====== 00:10:16.398 trtype: tcp 00:10:16.398 adrfam: ipv4 00:10:16.398 subtype: nvme subsystem 00:10:16.398 treq: not required 00:10:16.398 portid: 0 00:10:16.398 trsvcid: 4420 00:10:16.398 subnqn: nqn.2016-06.io.spdk:cnode1 00:10:16.398 traddr: 10.0.0.2 00:10:16.398 eflags: none 00:10:16.398 sectype: none 00:10:16.398 =====Discovery Log Entry 2====== 00:10:16.398 trtype: tcp 00:10:16.398 adrfam: ipv4 00:10:16.398 subtype: nvme subsystem 00:10:16.398 treq: not required 00:10:16.398 portid: 0 00:10:16.398 trsvcid: 4420 00:10:16.398 subnqn: nqn.2016-06.io.spdk:cnode2 00:10:16.398 traddr: 10.0.0.2 00:10:16.398 eflags: none 00:10:16.398 sectype: none 00:10:16.398 =====Discovery Log Entry 3====== 00:10:16.398 trtype: tcp 00:10:16.398 adrfam: ipv4 00:10:16.398 subtype: nvme subsystem 00:10:16.398 treq: not required 00:10:16.398 portid: 
0 00:10:16.398 trsvcid: 4420 00:10:16.398 subnqn: nqn.2016-06.io.spdk:cnode3 00:10:16.398 traddr: 10.0.0.2 00:10:16.398 eflags: none 00:10:16.398 sectype: none 00:10:16.398 =====Discovery Log Entry 4====== 00:10:16.398 trtype: tcp 00:10:16.398 adrfam: ipv4 00:10:16.398 subtype: nvme subsystem 00:10:16.398 treq: not required 00:10:16.398 portid: 0 00:10:16.398 trsvcid: 4420 00:10:16.398 subnqn: nqn.2016-06.io.spdk:cnode4 00:10:16.398 traddr: 10.0.0.2 00:10:16.398 eflags: none 00:10:16.398 sectype: none 00:10:16.398 =====Discovery Log Entry 5====== 00:10:16.398 trtype: tcp 00:10:16.398 adrfam: ipv4 00:10:16.398 subtype: discovery subsystem referral 00:10:16.398 treq: not required 00:10:16.398 portid: 0 00:10:16.398 trsvcid: 4430 00:10:16.398 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:10:16.398 traddr: 10.0.0.2 00:10:16.398 eflags: none 00:10:16.398 sectype: none 00:10:16.398 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:10:16.398 Perform nvmf subsystem discovery via RPC 00:10:16.398 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:10:16.398 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.398 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.398 [ 00:10:16.398 { 00:10:16.398 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:10:16.398 "subtype": "Discovery", 00:10:16.398 "listen_addresses": [ 00:10:16.398 { 00:10:16.398 "trtype": "TCP", 00:10:16.398 "adrfam": "IPv4", 00:10:16.398 "traddr": "10.0.0.2", 00:10:16.398 "trsvcid": "4420" 00:10:16.398 } 00:10:16.398 ], 00:10:16.398 "allow_any_host": true, 00:10:16.398 "hosts": [] 00:10:16.398 }, 00:10:16.398 { 00:10:16.398 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:10:16.398 "subtype": "NVMe", 00:10:16.398 "listen_addresses": [ 
00:10:16.398 { 00:10:16.398 "trtype": "TCP", 00:10:16.398 "adrfam": "IPv4", 00:10:16.398 "traddr": "10.0.0.2", 00:10:16.398 "trsvcid": "4420" 00:10:16.398 } 00:10:16.398 ], 00:10:16.398 "allow_any_host": true, 00:10:16.398 "hosts": [], 00:10:16.398 "serial_number": "SPDK00000000000001", 00:10:16.398 "model_number": "SPDK bdev Controller", 00:10:16.398 "max_namespaces": 32, 00:10:16.398 "min_cntlid": 1, 00:10:16.398 "max_cntlid": 65519, 00:10:16.398 "namespaces": [ 00:10:16.398 { 00:10:16.398 "nsid": 1, 00:10:16.398 "bdev_name": "Null1", 00:10:16.398 "name": "Null1", 00:10:16.398 "nguid": "74286222BF9E421F8132F2AD26C0DE9F", 00:10:16.398 "uuid": "74286222-bf9e-421f-8132-f2ad26c0de9f" 00:10:16.398 } 00:10:16.398 ] 00:10:16.399 }, 00:10:16.399 { 00:10:16.399 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:16.399 "subtype": "NVMe", 00:10:16.399 "listen_addresses": [ 00:10:16.399 { 00:10:16.399 "trtype": "TCP", 00:10:16.399 "adrfam": "IPv4", 00:10:16.399 "traddr": "10.0.0.2", 00:10:16.399 "trsvcid": "4420" 00:10:16.399 } 00:10:16.399 ], 00:10:16.399 "allow_any_host": true, 00:10:16.399 "hosts": [], 00:10:16.399 "serial_number": "SPDK00000000000002", 00:10:16.399 "model_number": "SPDK bdev Controller", 00:10:16.399 "max_namespaces": 32, 00:10:16.399 "min_cntlid": 1, 00:10:16.399 "max_cntlid": 65519, 00:10:16.399 "namespaces": [ 00:10:16.399 { 00:10:16.399 "nsid": 1, 00:10:16.399 "bdev_name": "Null2", 00:10:16.399 "name": "Null2", 00:10:16.399 "nguid": "23CA7B1548704D908BCBE32C7031C3DC", 00:10:16.399 "uuid": "23ca7b15-4870-4d90-8bcb-e32c7031c3dc" 00:10:16.399 } 00:10:16.399 ] 00:10:16.399 }, 00:10:16.399 { 00:10:16.399 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:10:16.399 "subtype": "NVMe", 00:10:16.399 "listen_addresses": [ 00:10:16.399 { 00:10:16.399 "trtype": "TCP", 00:10:16.399 "adrfam": "IPv4", 00:10:16.399 "traddr": "10.0.0.2", 00:10:16.399 "trsvcid": "4420" 00:10:16.399 } 00:10:16.399 ], 00:10:16.399 "allow_any_host": true, 00:10:16.399 "hosts": [], 00:10:16.399 
"serial_number": "SPDK00000000000003", 00:10:16.399 "model_number": "SPDK bdev Controller", 00:10:16.399 "max_namespaces": 32, 00:10:16.399 "min_cntlid": 1, 00:10:16.399 "max_cntlid": 65519, 00:10:16.399 "namespaces": [ 00:10:16.399 { 00:10:16.399 "nsid": 1, 00:10:16.399 "bdev_name": "Null3", 00:10:16.399 "name": "Null3", 00:10:16.399 "nguid": "626410C7B793446A9B60C14BC9D792EA", 00:10:16.399 "uuid": "626410c7-b793-446a-9b60-c14bc9d792ea" 00:10:16.399 } 00:10:16.399 ] 00:10:16.399 }, 00:10:16.399 { 00:10:16.399 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:10:16.399 "subtype": "NVMe", 00:10:16.399 "listen_addresses": [ 00:10:16.399 { 00:10:16.399 "trtype": "TCP", 00:10:16.399 "adrfam": "IPv4", 00:10:16.399 "traddr": "10.0.0.2", 00:10:16.399 "trsvcid": "4420" 00:10:16.399 } 00:10:16.399 ], 00:10:16.399 "allow_any_host": true, 00:10:16.399 "hosts": [], 00:10:16.399 "serial_number": "SPDK00000000000004", 00:10:16.399 "model_number": "SPDK bdev Controller", 00:10:16.399 "max_namespaces": 32, 00:10:16.399 "min_cntlid": 1, 00:10:16.399 "max_cntlid": 65519, 00:10:16.399 "namespaces": [ 00:10:16.399 { 00:10:16.399 "nsid": 1, 00:10:16.399 "bdev_name": "Null4", 00:10:16.399 "name": "Null4", 00:10:16.399 "nguid": "09509A80BC6B4E2ABFDFCD680C7FDFA5", 00:10:16.399 "uuid": "09509a80-bc6b-4e2a-bfdf-cd680c7fdfa5" 00:10:16.399 } 00:10:16.399 ] 00:10:16.399 } 00:10:16.399 ] 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 
1 4) 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:10:16.399 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:10:16.400 
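The setup and teardown phases captured above (from SPDK's target/discovery.sh) follow a simple per-iteration pattern: create a null bdev, a subsystem, a namespace, and a TCP listener, then delete them in a mirrored loop. A minimal dry-run sketch of that pattern follows; `rpc_cmd` is stubbed to echo here (in the real suite it dispatches to scripts/rpc.py against the running target), and `NVMF_FIRST_TARGET_IP` is an assumed variable name standing in for the 10.0.0.2 address seen in the log.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the per-subsystem setup/teardown loop seen in the log.
# rpc_cmd is stubbed to echo; the real helper forwards to scripts/rpc.py.
rpc_cmd() { echo "rpc: $*"; }

NVMF_PORT=4420
NVMF_FIRST_TARGET_IP=10.0.0.2   # assumed name; matches the listener address above

for i in $(seq 1 4); do
    rpc_cmd bdev_null_create "Null$i" 102400 512
    rpc_cmd nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
    rpc_cmd nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
    rpc_cmd nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" \
        -t tcp -a "$NVMF_FIRST_TARGET_IP" -s "$NVMF_PORT"
done

# Teardown mirrors setup, as in the delete phase of the log above.
for i in $(seq 1 4); do
    rpc_cmd nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
    rpc_cmd bdev_null_delete "Null$i"
done
```

With the stub in place the script only prints the RPCs it would issue, which matches the command sequence visible in the xtrace output.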
12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:16.400 rmmod nvme_tcp 00:10:16.400 rmmod nvme_fabrics 00:10:16.400 rmmod nvme_keyring 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 2821528 ']' 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 2821528 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@950 -- # '[' -z 2821528 ']' 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # kill -0 2821528 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # uname 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2821528 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2821528' 00:10:16.400 killing process with pid 2821528 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@969 -- # kill 2821528 00:10:16.400 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@974 -- # wait 2821528 00:10:16.658 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:16.658 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:16.658 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:16.658 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:16.658 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:16.658 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.658 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.658 12:11:09 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.199 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:19.199 00:10:19.199 real 0m5.931s 00:10:19.199 user 0m6.909s 00:10:19.199 sys 0m1.745s 00:10:19.199 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:19.199 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:10:19.199 ************************************ 00:10:19.199 END TEST 
nvmf_target_discovery 00:10:19.199 ************************************ 00:10:19.199 12:11:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:19.199 12:11:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:19.199 12:11:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:19.199 12:11:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:19.199 ************************************ 00:10:19.199 START TEST nvmf_referrals 00:10:19.199 ************************************ 00:10:19.199 12:11:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:10:19.199 * Looking for test storage... 00:10:19.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 
00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.199 12:11:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # 
NVMF_REFERRAL_IP_2=127.0.0.3 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:19.199 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:19.200 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:19.200 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:19.200 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:19.200 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:19.200 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:10:19.200 12:11:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:21.105 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:21.105 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in 
"${pci_devs[@]}" 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:21.105 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found 
net devices under 0000:0a:00.1: cvl_0_1' 00:10:21.105 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:21.105 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:21.106 12:11:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:21.106 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:21.106 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:10:21.106 00:10:21.106 --- 10.0.0.2 ping statistics --- 00:10:21.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.106 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:21.106 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:21.106 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.182 ms 00:10:21.106 00:10:21.106 --- 10.0.0.1 ping statistics --- 00:10:21.106 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:21.106 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=2823629 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 2823629 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@831 -- # '[' -z 2823629 ']' 00:10:21.106 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:21.366 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:21.366 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:21.366 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:21.366 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:21.366 12:11:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:21.366 [2024-07-26 12:11:14.401956] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:10:21.367 [2024-07-26 12:11:14.402027] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:21.367 EAL: No free 2048 kB hugepages reported on node 1 00:10:21.367 [2024-07-26 12:11:14.468287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:21.367 [2024-07-26 12:11:14.585302] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:21.367 [2024-07-26 12:11:14.585364] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:21.367 [2024-07-26 12:11:14.585399] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:21.367 [2024-07-26 12:11:14.585413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:21.367 [2024-07-26 12:11:14.585425] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:21.367 [2024-07-26 12:11:14.585534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:21.367 [2024-07-26 12:11:14.585618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:21.367 [2024-07-26 12:11:14.585710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:21.367 [2024-07-26 12:11:14.585713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # return 0 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.304 [2024-07-26 12:11:15.406516] tcp.c: 
677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.304 [2024-07-26 12:11:15.418694] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:10:22.304 12:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:22.304 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.563 12:11:15 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ 
nvme == \n\v\m\e ]] 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:22.563 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:22.848 12:11:15 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery 
subsystem referral")' 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:10:23.110 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.111 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.111 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.111 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:10:23.111 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:10:23.111 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:23.111 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:10:23.111 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.111 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.111 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:10:23.111 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals 
-- target/referrals.sh@74 -- # get_referral_ips nvme 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.370 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:10:23.629 12:11:16 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@10 -- # set +x 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 8009 -o json 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:10:23.629 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # 
set +e 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:23.889 rmmod nvme_tcp 00:10:23.889 rmmod nvme_fabrics 00:10:23.889 rmmod nvme_keyring 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 2823629 ']' 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 2823629 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@950 -- # '[' -z 2823629 ']' 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # kill -0 2823629 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # uname 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:23.889 12:11:16 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2823629 00:10:23.889 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:23.889 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:23.889 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2823629' 00:10:23.889 killing process with pid 2823629 00:10:23.889 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@969 -- # kill 2823629 00:10:23.889 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@974 -- # wait 2823629 00:10:24.149 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:24.149 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:24.149 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:24.149 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:24.149 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:24.149 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:24.149 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:24.149 12:11:17 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.687 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:26.687 00:10:26.687 real 0m7.354s 00:10:26.687 user 0m12.274s 00:10:26.687 sys 0m2.220s 00:10:26.687 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:26.687 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:10:26.687 ************************************ 00:10:26.687 END TEST nvmf_referrals 00:10:26.687 ************************************ 00:10:26.687 12:11:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:26.687 12:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 
-- # '[' 3 -le 1 ']' 00:10:26.687 12:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:26.687 12:11:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:26.687 ************************************ 00:10:26.687 START TEST nvmf_connect_disconnect 00:10:26.687 ************************************ 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:10:26.688 * Looking for test storage... 00:10:26.688 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 
00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.688 12:11:19 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 
00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:10:26.688 12:11:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 
-- # pci_devs=() 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:28.593 12:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:28.593 12:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:28.593 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:28.593 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:28.593 12:11:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:28.593 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:28.593 
12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:28.593 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:28.593 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
-- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:28.594 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:10:28.594 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:10:28.594 00:10:28.594 --- 10.0.0.2 ping statistics --- 00:10:28.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.594 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:28.594 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:28.594 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.168 ms 00:10:28.594 00:10:28.594 --- 10.0.0.1 ping statistics --- 00:10:28.594 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:28.594 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 
00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@724 -- # xtrace_disable 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=2826048 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 2826048 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@831 -- # '[' -z 2826048 ']' 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # local max_retries=100 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:28.594 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # xtrace_disable 00:10:28.594 12:11:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:28.594 [2024-07-26 12:11:21.684001] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:10:28.594 [2024-07-26 12:11:21.684118] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:28.594 EAL: No free 2048 kB hugepages reported on node 1 00:10:28.594 [2024-07-26 12:11:21.762525] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:28.854 [2024-07-26 12:11:21.888813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:28.854 [2024-07-26 12:11:21.888873] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:28.854 [2024-07-26 12:11:21.888890] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:28.854 [2024-07-26 12:11:21.888903] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:28.854 [2024-07-26 12:11:21.888915] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:28.854 [2024-07-26 12:11:21.888976] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:28.854 [2024-07-26 12:11:21.889008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:28.854 [2024-07-26 12:11:21.889068] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:28.855 [2024-07-26 12:11:21.889070] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # return 0 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:28.855 [2024-07-26 12:11:22.055710] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:28.855 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:29.113 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.113 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:29.113 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:29.113 12:11:22 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:29.113 [2024-07-26 12:11:22.116874] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:29.113 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:29.113 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:10:29.113 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:10:29.113 12:11:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:10:31.670 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:34.958 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:37.493 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:40.035 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.567 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:42.827 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:10:42.827 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:10:42.827 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:42.827 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:10:42.827 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:42.827 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:10:42.827 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:42.827 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect 
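The `connect_disconnect.sh` provisioning traced above reduces to a short RPC sequence against the running `nvmf_tgt`: create the TCP transport, back it with a malloc bdev, and expose that bdev as a namespace behind a listener. A hedged sketch of those calls is below; the arguments come from the log, the `scripts/rpc.py` path is an assumption, and `DRY_RUN=1` (the default here) echoes each call rather than issuing it, since the real calls need a live target.

```shell
#!/usr/bin/env bash
# Sketch of the RPC sequence connect_disconnect.sh issues above.
# DRY_RUN=1 (the default) echoes each rpc.py call instead of issuing it.
set -euo pipefail

RPC=${RPC:-scripts/rpc.py}        # path assumption; adjust to your SPDK tree
NQN=nqn.2016-06.io.spdk:cnode1

rpc() {
    if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "$RPC $*"; else "$RPC" "$@"; fi
}

rpc nvmf_create_transport -t tcp -o -u 8192 -c 0   # transport opts, from the log
rpc bdev_malloc_create 64 512                      # 64 MiB bdev, 512 B blocks
rpc nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns "$NQN" Malloc0
rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
```

Once the listener notice appears, the test loops `num_iterations` times connecting and disconnecting an initiator, producing the repeated "disconnected 1 controller(s)" lines in the log.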
-- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:42.827 rmmod nvme_tcp 00:10:42.828 rmmod nvme_fabrics 00:10:42.828 rmmod nvme_keyring 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 2826048 ']' 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 2826048 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2826048 ']' 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # kill -0 2826048 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # uname 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2826048 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2826048' 00:10:42.828 killing process with pid 2826048 00:10:42.828 12:11:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@969 -- # kill 2826048 00:10:42.828 12:11:35 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@974 -- # wait 2826048 00:10:43.087 12:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:43.087 12:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:43.087 12:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:43.087 12:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:43.087 12:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:43.087 12:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:43.087 12:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:43.087 12:11:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.022 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:45.022 00:10:45.022 real 0m18.858s 00:10:45.022 user 0m56.655s 00:10:45.022 sys 0m3.250s 00:10:45.022 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable 00:10:45.022 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:10:45.022 ************************************ 00:10:45.022 END TEST nvmf_connect_disconnect 00:10:45.022 ************************************ 00:10:45.022 12:11:38 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:45.022 12:11:38 nvmf_tcp.nvmf_target_extra 
-- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:10:45.022 12:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:10:45.022 12:11:38 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:45.281 ************************************ 00:10:45.281 START TEST nvmf_multitarget 00:10:45.281 ************************************ 00:10:45.281 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:10:45.281 * Looking for test storage... 00:10:45.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.282 12:11:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:10:45.282 
12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:10:45.282 12:11:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:10:47.187 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:47.187 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:10:47.187 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:47.187 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:47.187 12:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:47.187 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:47.187 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:47.187 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:10:47.187 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:47.187 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:47.188 12:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:10:47.188 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:10:47.188 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.188 12:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:10:47.188 Found net devices under 0000:0a:00.0: cvl_0_0 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:10:47.188 Found net devices under 0000:0a:00.1: cvl_0_1 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:47.188 12:11:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:47.188 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:47.447 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:47.447 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms
00:10:47.447
00:10:47.447 --- 10.0.0.2 ping statistics ---
00:10:47.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:47.447 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:47.447 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:47.447 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms
00:10:47.447
00:10:47.447 --- 10.0.0.1 ping statistics ---
00:10:47.447 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:47.447 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:10:47.447 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=2829682
00:10:47.448 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 2829682
00:10:47.448 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@831 -- # '[' -z 2829682 ']'
00:10:47.448 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:47.448 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:47.448 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:47.448 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:47.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:47.448 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:47.448 12:11:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:10:47.448 [2024-07-26 12:11:40.524215] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:10:47.448 [2024-07-26 12:11:40.524302] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:10:47.448 EAL: No free 2048 kB hugepages reported on node 1
00:10:47.448 [2024-07-26 12:11:40.593319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:10:47.706 [2024-07-26 12:11:40.714179] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:10:47.706 [2024-07-26 12:11:40.714236] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:10:47.706 [2024-07-26 12:11:40.714263] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:10:47.706 [2024-07-26 12:11:40.714276] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:10:47.706 [2024-07-26 12:11:40.714287] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:10:47.706 [2024-07-26 12:11:40.714343] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:10:47.706 [2024-07-26 12:11:40.714397] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:10:47.706 [2024-07-26 12:11:40.714449] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:10:47.706 [2024-07-26 12:11:40.714452] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:10:48.273 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:10:48.273 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # return 0
00:10:48.273 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:10:48.273 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@730 -- # xtrace_disable
00:10:48.273 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:10:48.273 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:10:48.273 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT
00:10:48.273 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:10:48.273 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length
00:10:48.531 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']'
00:10:48.531 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32
00:10:48.531 "nvmf_tgt_1"
00:10:48.531 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32
00:10:48.789 "nvmf_tgt_2"
00:10:48.789 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:10:48.789 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length
00:10:48.789 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']'
00:10:48.789 12:11:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1
00:10:48.789 true
00:10:48.789 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2
00:10:49.047 true
00:10:49.047 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets
00:10:49.047 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length
00:10:49.047 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']'
00:10:49.047 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT
00:10:49.047 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini
00:10:49.047 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup
00:10:49.047 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync
00:10:49.047 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:10:49.047 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e
00:10:49.047 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20}
00:10:49.047 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:10:49.047 rmmod nvme_tcp
00:10:49.047 rmmod nvme_fabrics
00:10:49.047 rmmod nvme_keyring
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 2829682 ']'
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 2829682
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@950 -- # '[' -z 2829682 ']'
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # kill -0 2829682
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # uname
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2829682
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2829682'
00:10:49.305 killing process with pid 2829682
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@969 -- # kill 2829682
00:10:49.305 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@974 -- # wait 2829682
00:10:49.565 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:10:49.565 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:10:49.565 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:10:49.565 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:10:49.565 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns
00:10:49.565 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:49.565 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:49.565 12:11:42 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:51.482 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:10:51.482
00:10:51.482 real 0m6.368s
00:10:51.482 user 0m9.087s
00:10:51.482 sys 0m1.936s
00:10:51.482 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1126 -- # xtrace_disable
00:10:51.482 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x
00:10:51.482 ************************************
00:10:51.482 END TEST nvmf_multitarget
00:10:51.482 ************************************
00:10:51.482 12:11:44 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:10:51.482 12:11:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:10:51.482 12:11:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable
00:10:51.482 12:11:44 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:10:51.482 ************************************
00:10:51.482 START TEST nvmf_rpc
00:10:51.482 ************************************
00:10:51.482 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp
00:10:51.741 * Looking for test storage...
00:10:51.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:10:51.741 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable
00:10:51.742 12:11:44 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=()
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=()
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=()
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=()
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=()
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=()
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=()
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:10:53.645 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)'
00:10:53.645 Found 0000:0a:00.1 (0x8086 - 0x159b)
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 ))
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0'
00:10:53.645 Found net devices under 0000:0a:00.0: cvl_0_0
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}"
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}"
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 ))
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1'
00:10:53.645 Found net devices under 0000:0a:00.1: cvl_0_1
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}")
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 ))
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]]
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}")
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 ))
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP=
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:10:53.645 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:10:53.907 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:10:53.907 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.242 ms
00:10:53.907
00:10:53.907 --- 10.0.0.2 ping statistics ---
00:10:53.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:53.907 rtt min/avg/max/mdev = 0.242/0.242/0.242/0.000 ms
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:10:53.907 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:10:53.907 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms
00:10:53.907
00:10:53.907 --- 10.0.0.1 ping statistics ---
00:10:53.907 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:10:53.907 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@724 -- # xtrace_disable
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=2831914
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 2831914
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@831 -- # '[' -z 2831914 ']'
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # local max_retries=100
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:10:53.907 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # xtrace_disable
00:10:53.907 12:11:46 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x
00:10:53.907 [2024-07-26 12:11:47.026361] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:10:53.907 [2024-07-26 12:11:47.026463] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:53.907 EAL: No free 2048 kB hugepages reported on node 1 00:10:53.907 [2024-07-26 12:11:47.091416] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.167 [2024-07-26 12:11:47.203357] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.167 [2024-07-26 12:11:47.203414] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.167 [2024-07-26 12:11:47.203428] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.167 [2024-07-26 12:11:47.203440] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.167 [2024-07-26 12:11:47.203450] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
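The `nvmf_tcp_init` sequence recorded above (nvmf/common.sh, lines @229-@268) builds a loopback test topology: the target-side interface is moved into a network namespace, each side gets an address on 10.0.0.0/24, and a ping in each direction verifies the link. The sketch below replays that sequence as a dry run; the interface names, namespace name, and addresses are taken from the log, while the `run` wrapper (which only echoes each command) is an assumption added here so the sketch is safe to execute without root or the real `cvl_*` hardware.

```shell
#!/bin/sh
# Dry-run sketch of the netns topology built by nvmf_tcp_init in the log.
# Names and addresses come from the log; "run" only echoes each command.
TARGET_IF=cvl_0_0
INITIATOR_IF=cvl_0_1
NS=cvl_0_0_ns_spdk

run() { echo "+ $*"; }

run ip -4 addr flush "$TARGET_IF"
run ip -4 addr flush "$INITIATOR_IF"
run ip netns add "$NS"
run ip link set "$TARGET_IF" netns "$NS"            # target side lives in the namespace
run ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"     # initiator IP stays on the host
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
run ip link set "$INITIATOR_IF" up
run ip netns exec "$NS" ip link set "$TARGET_IF" up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                              # host -> namespaced target
run ip netns exec "$NS" ping -c 1 10.0.0.1          # namespace -> host
```

With the real commands in place of `run`, the target application is then launched inside the namespace (`ip netns exec cvl_0_0_ns_spdk .../nvmf_tgt`), exactly as the `NVMF_TARGET_NS_CMD` / `NVMF_APP` lines in the log show.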
00:10:54.167 [2024-07-26 12:11:47.203510] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.167 [2024-07-26 12:11:47.203566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.167 [2024-07-26 12:11:47.203630] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.167 [2024-07-26 12:11:47.203633] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # return 0 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@730 -- # xtrace_disable 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:10:54.167 "tick_rate": 2700000000, 00:10:54.167 "poll_groups": [ 00:10:54.167 { 00:10:54.167 "name": "nvmf_tgt_poll_group_000", 00:10:54.167 "admin_qpairs": 0, 00:10:54.167 "io_qpairs": 0, 00:10:54.167 "current_admin_qpairs": 0, 00:10:54.167 "current_io_qpairs": 0, 00:10:54.167 "pending_bdev_io": 0, 00:10:54.167 "completed_nvme_io": 0, 
00:10:54.167 "transports": [] 00:10:54.167 }, 00:10:54.167 { 00:10:54.167 "name": "nvmf_tgt_poll_group_001", 00:10:54.167 "admin_qpairs": 0, 00:10:54.167 "io_qpairs": 0, 00:10:54.167 "current_admin_qpairs": 0, 00:10:54.167 "current_io_qpairs": 0, 00:10:54.167 "pending_bdev_io": 0, 00:10:54.167 "completed_nvme_io": 0, 00:10:54.167 "transports": [] 00:10:54.167 }, 00:10:54.167 { 00:10:54.167 "name": "nvmf_tgt_poll_group_002", 00:10:54.167 "admin_qpairs": 0, 00:10:54.167 "io_qpairs": 0, 00:10:54.167 "current_admin_qpairs": 0, 00:10:54.167 "current_io_qpairs": 0, 00:10:54.167 "pending_bdev_io": 0, 00:10:54.167 "completed_nvme_io": 0, 00:10:54.167 "transports": [] 00:10:54.167 }, 00:10:54.167 { 00:10:54.167 "name": "nvmf_tgt_poll_group_003", 00:10:54.167 "admin_qpairs": 0, 00:10:54.167 "io_qpairs": 0, 00:10:54.167 "current_admin_qpairs": 0, 00:10:54.167 "current_io_qpairs": 0, 00:10:54.167 "pending_bdev_io": 0, 00:10:54.167 "completed_nvme_io": 0, 00:10:54.167 "transports": [] 00:10:54.167 } 00:10:54.167 ] 00:10:54.167 }' 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:10:54.167 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 
-- # xtrace_disable 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.426 [2024-07-26 12:11:47.458856] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:10:54.426 "tick_rate": 2700000000, 00:10:54.426 "poll_groups": [ 00:10:54.426 { 00:10:54.426 "name": "nvmf_tgt_poll_group_000", 00:10:54.426 "admin_qpairs": 0, 00:10:54.426 "io_qpairs": 0, 00:10:54.426 "current_admin_qpairs": 0, 00:10:54.426 "current_io_qpairs": 0, 00:10:54.426 "pending_bdev_io": 0, 00:10:54.426 "completed_nvme_io": 0, 00:10:54.426 "transports": [ 00:10:54.426 { 00:10:54.426 "trtype": "TCP" 00:10:54.426 } 00:10:54.426 ] 00:10:54.426 }, 00:10:54.426 { 00:10:54.426 "name": "nvmf_tgt_poll_group_001", 00:10:54.426 "admin_qpairs": 0, 00:10:54.426 "io_qpairs": 0, 00:10:54.426 "current_admin_qpairs": 0, 00:10:54.426 "current_io_qpairs": 0, 00:10:54.426 "pending_bdev_io": 0, 00:10:54.426 "completed_nvme_io": 0, 00:10:54.426 "transports": [ 00:10:54.426 { 00:10:54.426 "trtype": "TCP" 00:10:54.426 } 00:10:54.426 ] 00:10:54.426 }, 00:10:54.426 { 00:10:54.426 "name": "nvmf_tgt_poll_group_002", 00:10:54.426 "admin_qpairs": 0, 00:10:54.426 "io_qpairs": 0, 00:10:54.426 "current_admin_qpairs": 0, 00:10:54.426 "current_io_qpairs": 0, 00:10:54.426 "pending_bdev_io": 0, 00:10:54.426 "completed_nvme_io": 0, 00:10:54.426 
"transports": [ 00:10:54.426 { 00:10:54.426 "trtype": "TCP" 00:10:54.426 } 00:10:54.426 ] 00:10:54.426 }, 00:10:54.426 { 00:10:54.426 "name": "nvmf_tgt_poll_group_003", 00:10:54.426 "admin_qpairs": 0, 00:10:54.426 "io_qpairs": 0, 00:10:54.426 "current_admin_qpairs": 0, 00:10:54.426 "current_io_qpairs": 0, 00:10:54.426 "pending_bdev_io": 0, 00:10:54.426 "completed_nvme_io": 0, 00:10:54.426 "transports": [ 00:10:54.426 { 00:10:54.426 "trtype": "TCP" 00:10:54.426 } 00:10:54.426 ] 00:10:54.426 } 00:10:54.426 ] 00:10:54.426 }' 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:10:54.426 12:11:47 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.426 Malloc1 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.426 [2024-07-26 12:11:47.620719] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:54.426 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.2 -s 4420 00:10:54.427 [2024-07-26 12:11:47.643146] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:10:54.427 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:54.427 could not add new controller: failed to write to nvme-fabrics device 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
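The rpc.sh steps above exercise the subsystem host allow-list: with allow-any-host disabled, the `nvme connect` from the host NQN fails with "does not allow host" (the expected `NOT` outcome), and only succeeds once `nvmf_subsystem_add_host` whitelists it. A dry-run sketch of that RPC sequence, using the subsystem and host NQNs from the log; the `rpc` wrapper, which merely echoes the SPDK `rpc.py` call it stands for, is an assumption added here so the sketch runs without a live target.

```shell
#!/bin/sh
# Dry-run sketch of the host allow-list flow from target/rpc.sh.
# NQNs and serial are the ones in the log; "rpc" only echoes the call.
SUBSYS=nqn.2016-06.io.spdk:cnode1
HOST=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55

rpc() { echo "rpc.py $*"; }

rpc nvmf_create_subsystem "$SUBSYS" -a -s SPDKISFASTANDAWESOME
rpc nvmf_subsystem_add_ns "$SUBSYS" Malloc1
rpc nvmf_subsystem_allow_any_host -d "$SUBSYS"      # disable allow-any-host
rpc nvmf_subsystem_add_listener "$SUBSYS" -t tcp -a 10.0.0.2 -s 4420
# Here `nvme connect --hostnqn=$HOST ...` is rejected with
# "Subsystem ... does not allow host ..." until the host is whitelisted:
rpc nvmf_subsystem_add_host "$SUBSYS" "$HOST"
# After this, the same connect succeeds and `nvme disconnect` tears it down.
```

This mirrors the log's structure: the first connect attempt is wrapped in `NOT` precisely because the rejection is the behavior under test.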
00:10:54.427 12:11:47 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.364 12:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.364 12:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:55.364 12:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.364 12:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:55.364 12:11:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.268 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- 
# local i=0 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # local es=0 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@638 -- # local arg=nvme 00:10:57.268 12:11:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -t nvme 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -P nvme 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # arg=/usr/sbin/nvme 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # [[ -x /usr/sbin/nvme ]] 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:57.268 [2024-07-26 12:11:50.473347] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55' 00:10:57.268 Failed to write to /dev/nvme-fabrics: Input/output error 00:10:57.268 could not add new controller: failed to write to nvme-fabrics device 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@653 -- # es=1 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd 
nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:10:57.268 12:11:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:58.206 12:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:10:58.206 12:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:10:58.206 12:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:58.206 12:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:58.206 12:11:51 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:00.152 12:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:00.152 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.152 12:11:53 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.152 [2024-07-26 12:11:53.271318] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:00.152 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:00.718 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:00.718 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:00.718 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:00.718 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:00.718 12:11:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:03.293 12:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:03.293 12:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:03.293 12:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:03.293 12:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:03.293 12:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:03.293 12:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:03.294 12:11:55 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:03.294 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # 
grep -q -w SPDKISFASTANDAWESOME 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # 
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.294 [2024-07-26 12:11:56.080669] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:03.294 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:03.554 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:03.554 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 
00:11:03.554 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:03.554 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:03.554 12:11:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:05.457 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:05.457 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:05.457 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:05.457 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:05.457 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:05.457 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:05.457 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:05.716 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w 
SPDKISFASTANDAWESOME 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:05.716 [2024-07-26 12:11:58.852379] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:05.716 12:11:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:06.285 12:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:06.285 12:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:06.285 12:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:06.285 12:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:06.285 
12:11:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:08.824 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 
00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.824 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.825 [2024-07-26 12:12:01.584823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 
0 == 0 ]] 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:08.825 12:12:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:09.085 12:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:09.085 12:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:09.085 12:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:09.085 12:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:09.085 12:12:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:10.990 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:10.990 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:10.990 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:10.990 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:10.990 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:10.990 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:10.990 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:11.250 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:11.250 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:11.250 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:11.250 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:11.250 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.250 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.251 [2024-07-26 12:12:04.393438] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:11.251 12:12:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:12.187 12:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:11:12.187 12:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:11:12.187 12:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:12.187 12:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:12.187 12:12:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:14.096 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.096 12:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.096 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 [2024-07-26 12:12:07.195225] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 
12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 
12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 [2024-07-26 12:12:07.243400] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 [2024-07-26 12:12:07.291565] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 
00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.097 [2024-07-26 12:12:07.339710] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.097 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.357 [2024-07-26 12:12:07.387865] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:14.357 12:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.357 12:12:07 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:11:14.357 "tick_rate": 2700000000, 00:11:14.357 "poll_groups": [ 00:11:14.357 { 00:11:14.357 "name": "nvmf_tgt_poll_group_000", 00:11:14.357 "admin_qpairs": 2, 00:11:14.357 "io_qpairs": 84, 00:11:14.357 "current_admin_qpairs": 0, 00:11:14.357 "current_io_qpairs": 0, 00:11:14.357 "pending_bdev_io": 0, 00:11:14.357 "completed_nvme_io": 283, 00:11:14.357 "transports": [ 00:11:14.357 { 00:11:14.357 "trtype": "TCP" 00:11:14.357 } 00:11:14.357 ] 00:11:14.357 }, 00:11:14.357 { 00:11:14.357 "name": "nvmf_tgt_poll_group_001", 00:11:14.357 "admin_qpairs": 2, 00:11:14.357 "io_qpairs": 84, 00:11:14.357 "current_admin_qpairs": 0, 00:11:14.357 "current_io_qpairs": 0, 00:11:14.357 "pending_bdev_io": 0, 00:11:14.357 "completed_nvme_io": 185, 00:11:14.357 "transports": [ 00:11:14.357 { 00:11:14.357 "trtype": "TCP" 00:11:14.357 } 00:11:14.357 ] 00:11:14.357 }, 00:11:14.357 { 00:11:14.357 "name": "nvmf_tgt_poll_group_002", 00:11:14.357 "admin_qpairs": 1, 00:11:14.357 "io_qpairs": 84, 00:11:14.357 "current_admin_qpairs": 0, 00:11:14.357 "current_io_qpairs": 0, 00:11:14.357 "pending_bdev_io": 0, 00:11:14.357 "completed_nvme_io": 134, 00:11:14.357 "transports": [ 00:11:14.357 { 00:11:14.357 "trtype": "TCP" 00:11:14.357 } 00:11:14.357 ] 00:11:14.357 }, 00:11:14.357 { 00:11:14.357 "name": "nvmf_tgt_poll_group_003", 00:11:14.357 "admin_qpairs": 2, 00:11:14.357 "io_qpairs": 84, 00:11:14.357 "current_admin_qpairs": 0, 00:11:14.357 "current_io_qpairs": 0, 00:11:14.357 "pending_bdev_io": 0, 
00:11:14.357 "completed_nvme_io": 84, 00:11:14.357 "transports": [ 00:11:14.357 { 00:11:14.357 "trtype": "TCP" 00:11:14.357 } 00:11:14.357 ] 00:11:14.357 } 00:11:14.357 ] 00:11:14.357 }' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 336 > 0 )) 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # 
set +e 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:14.357 rmmod nvme_tcp 00:11:14.357 rmmod nvme_fabrics 00:11:14.357 rmmod nvme_keyring 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 2831914 ']' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 2831914 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@950 -- # '[' -z 2831914 ']' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # kill -0 2831914 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # uname 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2831914 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2831914' 00:11:14.357 killing process with pid 2831914 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@969 -- # kill 2831914 00:11:14.357 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@974 -- # wait 2831914 00:11:14.926 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:14.926 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:14.926 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:14.926 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:14.926 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:14.926 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:14.926 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:14.926 12:12:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.832 12:12:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:16.833 00:11:16.833 real 0m25.200s 00:11:16.833 user 1m21.587s 00:11:16.833 sys 0m4.209s 00:11:16.833 12:12:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:16.833 12:12:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:16.833 ************************************ 00:11:16.833 END TEST nvmf_rpc 00:11:16.833 ************************************ 00:11:16.833 12:12:09 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:16.833 12:12:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:16.833 12:12:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:16.833 12:12:09 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # 
set +x 00:11:16.833 ************************************ 00:11:16.833 START TEST nvmf_invalid 00:11:16.833 ************************************ 00:11:16.833 12:12:09 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:11:16.833 * Looking for test storage... 00:11:16.833 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:11:16.833 12:12:10 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:11:18.740 12:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:18.740 
12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:18.740 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.740 12:12:11 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:18.740 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:18.740 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.741 
12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:18.741 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:18.741 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 
00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 
dev cvl_0_1 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:18.741 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:18.741 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.241 ms 00:11:18.741 00:11:18.741 --- 10.0.0.2 ping statistics --- 00:11:18.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.741 rtt min/avg/max/mdev = 0.241/0.241/0.241/0.000 ms 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:18.741 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:18.741 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.165 ms 00:11:18.741 00:11:18.741 --- 10.0.0.1 ping statistics --- 00:11:18.741 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:18.741 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:18.741 12:12:11 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:19.000 12:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:11:19.000 12:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:19.000 12:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:19.000 12:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:19.000 12:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=2837011 00:11:19.000 12:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:19.000 12:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 2837011 00:11:19.000 12:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@831 -- # '[' -z 2837011 ']' 00:11:19.000 12:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:19.000 12:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:19.000 12:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:19.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:19.000 12:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:19.000 12:12:12 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:19.000 [2024-07-26 12:12:12.058875] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:11:19.000 [2024-07-26 12:12:12.058969] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:19.000 EAL: No free 2048 kB hugepages reported on node 1 00:11:19.000 [2024-07-26 12:12:12.128271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:19.000 [2024-07-26 12:12:12.248456] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:19.000 [2024-07-26 12:12:12.248520] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:11:19.000 [2024-07-26 12:12:12.248547] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:19.000 [2024-07-26 12:12:12.248560] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:19.000 [2024-07-26 12:12:12.248572] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:19.000 [2024-07-26 12:12:12.248665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:19.000 [2024-07-26 12:12:12.248731] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:19.000 [2024-07-26 12:12:12.248779] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:19.000 [2024-07-26 12:12:12.248782] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:19.936 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:19.936 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # return 0 00:11:19.936 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:19.936 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:19.936 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:19.936 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:19.936 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:11:19.936 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode31969 00:11:20.194 [2024-07-26 12:12:13.286506] 
nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:11:20.194 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:11:20.194 { 00:11:20.194 "nqn": "nqn.2016-06.io.spdk:cnode31969", 00:11:20.194 "tgt_name": "foobar", 00:11:20.194 "method": "nvmf_create_subsystem", 00:11:20.194 "req_id": 1 00:11:20.194 } 00:11:20.194 Got JSON-RPC error response 00:11:20.194 response: 00:11:20.194 { 00:11:20.194 "code": -32603, 00:11:20.194 "message": "Unable to find target foobar" 00:11:20.194 }' 00:11:20.194 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:11:20.194 { 00:11:20.194 "nqn": "nqn.2016-06.io.spdk:cnode31969", 00:11:20.194 "tgt_name": "foobar", 00:11:20.194 "method": "nvmf_create_subsystem", 00:11:20.194 "req_id": 1 00:11:20.194 } 00:11:20.194 Got JSON-RPC error response 00:11:20.194 response: 00:11:20.194 { 00:11:20.194 "code": -32603, 00:11:20.194 "message": "Unable to find target foobar" 00:11:20.194 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:11:20.194 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:11:20.194 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode19831 00:11:20.486 [2024-07-26 12:12:13.543338] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode19831: invalid serial number 'SPDKISFASTANDAWESOME' 00:11:20.486 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:11:20.486 { 00:11:20.486 "nqn": "nqn.2016-06.io.spdk:cnode19831", 00:11:20.486 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:20.486 "method": "nvmf_create_subsystem", 00:11:20.486 "req_id": 1 00:11:20.486 } 00:11:20.486 Got JSON-RPC error response 00:11:20.486 response: 
00:11:20.486 { 00:11:20.486 "code": -32602, 00:11:20.486 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:20.486 }' 00:11:20.486 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:11:20.486 { 00:11:20.486 "nqn": "nqn.2016-06.io.spdk:cnode19831", 00:11:20.486 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:11:20.486 "method": "nvmf_create_subsystem", 00:11:20.486 "req_id": 1 00:11:20.486 } 00:11:20.486 Got JSON-RPC error response 00:11:20.486 response: 00:11:20.486 { 00:11:20.486 "code": -32602, 00:11:20.486 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:11:20.486 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:20.486 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:11:20.486 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode787 00:11:20.752 [2024-07-26 12:12:13.796148] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode787: invalid model number 'SPDK_Controller' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:11:20.752 { 00:11:20.752 "nqn": "nqn.2016-06.io.spdk:cnode787", 00:11:20.752 "model_number": "SPDK_Controller\u001f", 00:11:20.752 "method": "nvmf_create_subsystem", 00:11:20.752 "req_id": 1 00:11:20.752 } 00:11:20.752 Got JSON-RPC error response 00:11:20.752 response: 00:11:20.752 { 00:11:20.752 "code": -32602, 00:11:20.752 "message": "Invalid MN SPDK_Controller\u001f" 00:11:20.752 }' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:11:20.752 { 00:11:20.752 "nqn": "nqn.2016-06.io.spdk:cnode787", 00:11:20.752 "model_number": "SPDK_Controller\u001f", 00:11:20.752 "method": "nvmf_create_subsystem", 00:11:20.752 "req_id": 1 00:11:20.752 } 00:11:20.752 Got 
JSON-RPC error response 00:11:20.752 response: 00:11:20.752 { 00:11:20.752 "code": -32602, 00:11:20.752 "message": "Invalid MN SPDK_Controller\u001f" 00:11:20.752 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.752 12:12:13 
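Each negative test above follows the same shape: invalid.sh captures the JSON-RPC error text from rpc.py into `out`, then verifies the expected message with a bash glob match. A minimal sketch of that check, with a hard-coded stand-in for the live rpc.py output:

```shell
#!/usr/bin/env bash
# Sketch of the invalid.sh check pattern: capture the RPC error response,
# then assert the expected message appears via [[ ... == *pattern* ]].
# 'out' is a stand-in; the real script fills it from rpc.py.
out='{ "code": -32602, "message": "Invalid MN SPDK_Controller" }'
if [[ $out == *"Invalid MN"* ]]; then
    echo "negative test passed"
fi
```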
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.752 12:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:20.752 12:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.752 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:20.753 12:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 34 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x22' 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='"' 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:20.753 12:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.753 12:12:13 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ K == \- ]] 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'Kt]s=Wv1APa77n"d'\''vuRB' 00:11:20.753 12:12:13 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'Kt]s=Wv1APa77n"d'\''vuRB' nqn.2016-06.io.spdk:cnode30208 00:11:21.012 [2024-07-26 12:12:14.149326] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode30208: invalid serial number 'Kt]s=Wv1APa77n"d'vuRB' 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:11:21.012 { 00:11:21.012 "nqn": "nqn.2016-06.io.spdk:cnode30208", 00:11:21.012 "serial_number": "Kt]s=Wv1APa77n\"d'\''vuRB", 00:11:21.012 "method": "nvmf_create_subsystem", 00:11:21.012 "req_id": 1 00:11:21.012 } 00:11:21.012 Got JSON-RPC error response 00:11:21.012 response: 00:11:21.012 { 00:11:21.012 "code": -32602, 00:11:21.012 "message": "Invalid SN Kt]s=Wv1APa77n\"d'\''vuRB" 00:11:21.012 }' 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:11:21.012 { 00:11:21.012 "nqn": "nqn.2016-06.io.spdk:cnode30208", 00:11:21.012 "serial_number": "Kt]s=Wv1APa77n\"d'vuRB", 00:11:21.012 "method": "nvmf_create_subsystem", 00:11:21.012 "req_id": 1 00:11:21.012 } 00:11:21.012 Got JSON-RPC 
error response 00:11:21.012 response: 00:11:21.012 { 00:11:21.012 "code": -32602, 00:11:21.012 "message": "Invalid SN Kt]s=Wv1APa77n\"d'vuRB" 00:11:21.012 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 71 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x47' 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=G 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.012 12:12:14 
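The long per-character trace in this run comes from gen_random_s in invalid.sh, which assembles a random serial/model string from ASCII codes 32-127, one `printf`/`string+=` pair per character. A condensed sketch of that logic (simplified: it draws codes 32-126 directly rather than indexing the script's `chars` array, which also includes 127):

```shell
#!/usr/bin/env bash
# Condensed sketch of gen_random_s: pick a random printable ASCII code per
# iteration and append the corresponding character to the result string.
gen_random_s() {
    local length=$1 ll code string=
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 95 + 32 ))              # printable range 32..126
        # convert the code to octal, then to the character itself
        string+=$(printf "\\$(printf '%03o' "$code")")
    done
    echo "$string"
}
gen_random_s 21    # e.g. a 21-character string like Kt]s=Wv1APa77n"d'vuRB
```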
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 121 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x79' 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=y 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.012 12:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:11:21.012 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 95 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5f' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=_ 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 102 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x66' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=f 00:11:21.013 12:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:21.013 12:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 58 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3a' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=: 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 44 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2c' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=, 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:11:21.013 12:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 57 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x39' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=9 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 105 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x69' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=i 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.013 12:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:11:21.013 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 111 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6f' 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=o 00:11:21.272 12:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 96 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x60' 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='`' 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:11:21.272 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 36 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x24' 00:11:21.273 12:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='$' 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 98 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x62' 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=b 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:11:21.273 12:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 114 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x72' 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=r 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.273 12:12:14 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 55 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x37' 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=7 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ G == \- ]] 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'G%'\''yHd_Pf8LE7&:,w9cib1lNeQ1o&`D$b%q5Azr7R' 00:11:21.273 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'G%'\''yHd_Pf8LE7&:,w9cib1lNeQ1o&`D$b%q5Azr7R' nqn.2016-06.io.spdk:cnode25997 00:11:21.531 [2024-07-26 12:12:14.538578] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25997: invalid model number 'G%'yHd_Pf8LE7&:,w9cib1lNeQ1o&`D$b%q5Azr7R' 00:11:21.531 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:11:21.531 { 00:11:21.531 "nqn": "nqn.2016-06.io.spdk:cnode25997", 00:11:21.531 "model_number": "G%'\''yHd_Pf8LE7&:,w9cib1lNeQ1o&`D$b%q5Azr7R", 00:11:21.531 
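The long trace above is one loop of target/invalid.sh building a random 41-character model number one character at a time: pick an ASCII code, render it as a hex escape, append the literal character. A minimal bash sketch of that round-trip follows; `gen_string` is an illustrative name, not the function in the real script, and the printable-ASCII range is an assumption about how the codes are chosen.

```shell
# Sketch (bash) of the invalid.sh@24/@25 loop traced above:
# decimal code -> printf %x -> echo -e hex escape -> appended character.
gen_string() {
    local length=$1 string='' ll code hex
    for (( ll = 0; ll < length; ll++ )); do
        code=$(( RANDOM % 94 + 33 ))   # printable ASCII 0x21-0x7e (assumption)
        hex=$(printf '%x' "$code")     # decimal code rendered as hex digits
        string+=$(echo -e "\x$hex")    # hex escape expanded to the literal char
    done
    printf '%s\n' "$string"            # printf so a leading '-' is never an option
}
```

The per-character detour through `printf %x` / `echo -e` is what makes every iteration emit the four trace lines seen above (`(( ll < length ))`, `printf %x`, `echo -e`, `string+=`).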
"method": "nvmf_create_subsystem", 00:11:21.531 "req_id": 1 00:11:21.531 } 00:11:21.531 Got JSON-RPC error response 00:11:21.531 response: 00:11:21.531 { 00:11:21.531 "code": -32602, 00:11:21.531 "message": "Invalid MN G%'\''yHd_Pf8LE7&:,w9cib1lNeQ1o&`D$b%q5Azr7R" 00:11:21.531 }' 00:11:21.531 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:11:21.531 { 00:11:21.531 "nqn": "nqn.2016-06.io.spdk:cnode25997", 00:11:21.531 "model_number": "G%'yHd_Pf8LE7&:,w9cib1lNeQ1o&`D$b%q5Azr7R", 00:11:21.531 "method": "nvmf_create_subsystem", 00:11:21.531 "req_id": 1 00:11:21.531 } 00:11:21.531 Got JSON-RPC error response 00:11:21.531 response: 00:11:21.531 { 00:11:21.531 "code": -32602, 00:11:21.531 "message": "Invalid MN G%'yHd_Pf8LE7&:,w9cib1lNeQ1o&`D$b%q5Azr7R" 00:11:21.531 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:11:21.531 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:11:21.790 [2024-07-26 12:12:14.787513] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:21.790 12:12:14 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:11:22.049 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # [[ tcp == \T\C\P ]] 00:11:22.049 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # echo '' 00:11:22.049 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # head -n 1 00:11:22.049 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@67 -- # IP= 00:11:22.049 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a '' -s 
4421 00:11:22.049 [2024-07-26 12:12:15.301169] nvmf_rpc.c: 809:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:11:22.308 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # out='request: 00:11:22.308 { 00:11:22.308 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:22.308 "listen_address": { 00:11:22.308 "trtype": "tcp", 00:11:22.308 "traddr": "", 00:11:22.308 "trsvcid": "4421" 00:11:22.308 }, 00:11:22.308 "method": "nvmf_subsystem_remove_listener", 00:11:22.308 "req_id": 1 00:11:22.308 } 00:11:22.308 Got JSON-RPC error response 00:11:22.308 response: 00:11:22.308 { 00:11:22.308 "code": -32602, 00:11:22.308 "message": "Invalid parameters" 00:11:22.308 }' 00:11:22.308 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # [[ request: 00:11:22.308 { 00:11:22.308 "nqn": "nqn.2016-06.io.spdk:cnode", 00:11:22.308 "listen_address": { 00:11:22.308 "trtype": "tcp", 00:11:22.308 "traddr": "", 00:11:22.308 "trsvcid": "4421" 00:11:22.308 }, 00:11:22.308 "method": "nvmf_subsystem_remove_listener", 00:11:22.308 "req_id": 1 00:11:22.308 } 00:11:22.308 Got JSON-RPC error response 00:11:22.308 response: 00:11:22.308 { 00:11:22.308 "code": -32602, 00:11:22.308 "message": "Invalid parameters" 00:11:22.308 } != *\U\n\a\b\l\e\ \t\o\ \s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:11:22.308 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode7668 -i 0 00:11:22.308 [2024-07-26 12:12:15.545874] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode7668: invalid cntlid range [0-65519] 00:11:22.567 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # out='request: 00:11:22.567 { 00:11:22.567 "nqn": "nqn.2016-06.io.spdk:cnode7668", 00:11:22.567 "min_cntlid": 0, 00:11:22.567 "method": "nvmf_create_subsystem", 00:11:22.567 "req_id": 1 00:11:22.567 } 
00:11:22.567 Got JSON-RPC error response 00:11:22.567 response: 00:11:22.567 { 00:11:22.567 "code": -32602, 00:11:22.567 "message": "Invalid cntlid range [0-65519]" 00:11:22.568 }' 00:11:22.568 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # [[ request: 00:11:22.568 { 00:11:22.568 "nqn": "nqn.2016-06.io.spdk:cnode7668", 00:11:22.568 "min_cntlid": 0, 00:11:22.568 "method": "nvmf_create_subsystem", 00:11:22.568 "req_id": 1 00:11:22.568 } 00:11:22.568 Got JSON-RPC error response 00:11:22.568 response: 00:11:22.568 { 00:11:22.568 "code": -32602, 00:11:22.568 "message": "Invalid cntlid range [0-65519]" 00:11:22.568 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:22.568 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode13626 -i 65520 00:11:22.568 [2024-07-26 12:12:15.810745] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode13626: invalid cntlid range [65520-65519] 00:11:22.826 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # out='request: 00:11:22.826 { 00:11:22.826 "nqn": "nqn.2016-06.io.spdk:cnode13626", 00:11:22.826 "min_cntlid": 65520, 00:11:22.826 "method": "nvmf_create_subsystem", 00:11:22.826 "req_id": 1 00:11:22.826 } 00:11:22.826 Got JSON-RPC error response 00:11:22.826 response: 00:11:22.826 { 00:11:22.826 "code": -32602, 00:11:22.826 "message": "Invalid cntlid range [65520-65519]" 00:11:22.826 }' 00:11:22.826 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@76 -- # [[ request: 00:11:22.826 { 00:11:22.826 "nqn": "nqn.2016-06.io.spdk:cnode13626", 00:11:22.826 "min_cntlid": 65520, 00:11:22.826 "method": "nvmf_create_subsystem", 00:11:22.826 "req_id": 1 00:11:22.826 } 00:11:22.826 Got JSON-RPC error response 00:11:22.826 response: 00:11:22.826 { 00:11:22.826 "code": -32602, 00:11:22.826 "message": 
"Invalid cntlid range [65520-65519]" 00:11:22.826 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:22.826 12:12:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode14562 -I 0 00:11:22.827 [2024-07-26 12:12:16.051545] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode14562: invalid cntlid range [1-0] 00:11:22.827 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@77 -- # out='request: 00:11:22.827 { 00:11:22.827 "nqn": "nqn.2016-06.io.spdk:cnode14562", 00:11:22.827 "max_cntlid": 0, 00:11:22.827 "method": "nvmf_create_subsystem", 00:11:22.827 "req_id": 1 00:11:22.827 } 00:11:22.827 Got JSON-RPC error response 00:11:22.827 response: 00:11:22.827 { 00:11:22.827 "code": -32602, 00:11:22.827 "message": "Invalid cntlid range [1-0]" 00:11:22.827 }' 00:11:22.827 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # [[ request: 00:11:22.827 { 00:11:22.827 "nqn": "nqn.2016-06.io.spdk:cnode14562", 00:11:22.827 "max_cntlid": 0, 00:11:22.827 "method": "nvmf_create_subsystem", 00:11:22.827 "req_id": 1 00:11:22.827 } 00:11:22.827 Got JSON-RPC error response 00:11:22.827 response: 00:11:22.827 { 00:11:22.827 "code": -32602, 00:11:22.827 "message": "Invalid cntlid range [1-0]" 00:11:22.827 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:22.827 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24500 -I 65520 00:11:23.086 [2024-07-26 12:12:16.316412] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24500: invalid cntlid range [1-65520] 00:11:23.086 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # out='request: 00:11:23.086 { 00:11:23.086 "nqn": 
"nqn.2016-06.io.spdk:cnode24500", 00:11:23.086 "max_cntlid": 65520, 00:11:23.086 "method": "nvmf_create_subsystem", 00:11:23.086 "req_id": 1 00:11:23.086 } 00:11:23.086 Got JSON-RPC error response 00:11:23.086 response: 00:11:23.086 { 00:11:23.086 "code": -32602, 00:11:23.086 "message": "Invalid cntlid range [1-65520]" 00:11:23.086 }' 00:11:23.086 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@80 -- # [[ request: 00:11:23.086 { 00:11:23.086 "nqn": "nqn.2016-06.io.spdk:cnode24500", 00:11:23.086 "max_cntlid": 65520, 00:11:23.086 "method": "nvmf_create_subsystem", 00:11:23.086 "req_id": 1 00:11:23.086 } 00:11:23.086 Got JSON-RPC error response 00:11:23.086 response: 00:11:23.086 { 00:11:23.086 "code": -32602, 00:11:23.086 "message": "Invalid cntlid range [1-65520]" 00:11:23.086 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:23.344 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode17284 -i 6 -I 5 00:11:23.344 [2024-07-26 12:12:16.561238] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode17284: invalid cntlid range [6-5] 00:11:23.344 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # out='request: 00:11:23.344 { 00:11:23.344 "nqn": "nqn.2016-06.io.spdk:cnode17284", 00:11:23.344 "min_cntlid": 6, 00:11:23.344 "max_cntlid": 5, 00:11:23.344 "method": "nvmf_create_subsystem", 00:11:23.344 "req_id": 1 00:11:23.344 } 00:11:23.344 Got JSON-RPC error response 00:11:23.344 response: 00:11:23.344 { 00:11:23.344 "code": -32602, 00:11:23.344 "message": "Invalid cntlid range [6-5]" 00:11:23.344 }' 00:11:23.344 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@84 -- # [[ request: 00:11:23.344 { 00:11:23.344 "nqn": "nqn.2016-06.io.spdk:cnode17284", 00:11:23.344 "min_cntlid": 6, 00:11:23.344 "max_cntlid": 5, 00:11:23.344 "method": 
"nvmf_create_subsystem", 00:11:23.344 "req_id": 1 00:11:23.344 } 00:11:23.344 Got JSON-RPC error response 00:11:23.344 response: 00:11:23.344 { 00:11:23.344 "code": -32602, 00:11:23.344 "message": "Invalid cntlid range [6-5]" 00:11:23.344 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:11:23.344 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:11:23.602 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@87 -- # out='request: 00:11:23.602 { 00:11:23.602 "name": "foobar", 00:11:23.602 "method": "nvmf_delete_target", 00:11:23.602 "req_id": 1 00:11:23.602 } 00:11:23.602 Got JSON-RPC error response 00:11:23.602 response: 00:11:23.602 { 00:11:23.602 "code": -32602, 00:11:23.602 "message": "The specified target doesn'\''t exist, cannot delete it." 00:11:23.602 }' 00:11:23.602 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@88 -- # [[ request: 00:11:23.602 { 00:11:23.602 "name": "foobar", 00:11:23.602 "method": "nvmf_delete_target", 00:11:23.602 "req_id": 1 00:11:23.602 } 00:11:23.602 Got JSON-RPC error response 00:11:23.602 response: 00:11:23.602 { 00:11:23.602 "code": -32602, 00:11:23.602 "message": "The specified target doesn't exist, cannot delete it." 
00:11:23.602 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:11:23.602 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@90 -- # trap - SIGINT SIGTERM EXIT 00:11:23.602 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@91 -- # nvmftestfini 00:11:23.602 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:23.602 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@117 -- # sync 00:11:23.602 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:23.602 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@120 -- # set +e 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:23.603 rmmod nvme_tcp 00:11:23.603 rmmod nvme_fabrics 00:11:23.603 rmmod nvme_keyring 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@124 -- # set -e 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # return 0 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@489 -- # '[' -n 2837011 ']' 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@490 -- # killprocess 2837011 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@950 -- # '[' -z 2837011 ']' 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # kill -0 2837011 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@955 -- # uname 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2837011 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2837011' 00:11:23.603 killing process with pid 2837011 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@969 -- # kill 2837011 00:11:23.603 12:12:16 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@974 -- # wait 2837011 00:11:23.862 12:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:23.862 12:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:23.862 12:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:23.862 12:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:23.862 12:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:23.862 12:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:23.862 12:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:23.862 12:12:17 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:26.400 00:11:26.400 real 0m9.136s 00:11:26.400 user 0m22.831s 00:11:26.400 sys 0m2.304s 
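Every negative test in this section follows the same shape: issue an RPC that is expected to fail (bad model number, cntlid range [0-65519], [65520-65519], [1-0], [1-65520], [6-5], missing target), capture the JSON-RPC error body, and glob-match the diagnostic message, as in the traced `[[ $out == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]]` checks. A self-contained sketch of that check; `check_error` and the inlined error string are illustrative stand-ins for the real `rpc.py` output.

```shell
# Sketch (bash) of the negative-test pattern used throughout invalid.sh:
# glob-match the captured JSON-RPC error text against the expected message.
check_error() {
    local out=$1 expected=$2
    # substring glob, same shape as invalid.sh's [[ $out == *...* ]] checks
    [[ $out == *"$expected"* ]]
}

# stand-in for output captured from a failing rpc.py call
out='{ "code": -32602, "message": "Invalid cntlid range [0-65519]" }'
if check_error "$out" 'Invalid cntlid range'; then
    echo "negative test passed"
fi
```

Note the test asserts on the human-readable `message`, not just the `-32602` code, which is why each trace block above repeats the full JSON error body before the `[[ ... ]]` comparison.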
00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:11:26.400 ************************************ 00:11:26.400 END TEST nvmf_invalid 00:11:26.400 ************************************ 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:26.400 ************************************ 00:11:26.400 START TEST nvmf_connect_stress 00:11:26.400 ************************************ 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:11:26.400 * Looking for test storage... 
00:11:26.400 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:26.400 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:11:26.401 12:12:19 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local -ga x722 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:28.310 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.310 12:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:28.310 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.310 12:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:28.310 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:28.310 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:28.310 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:11:28.311 
12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:28.311 
12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:28.311 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:28.311 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:11:28.311 00:11:28.311 --- 10.0.0.2 ping statistics --- 00:11:28.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.311 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:28.311 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:28.311 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.158 ms 00:11:28.311 00:11:28.311 --- 10.0.0.1 ping statistics --- 00:11:28.311 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:28.311 rtt min/avg/max/mdev = 0.158/0.158/0.158/0.000 ms 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=2839655 00:11:28.311 12:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 2839655 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@831 -- # '[' -z 2839655 ']' 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:28.311 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:28.311 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.311 [2024-07-26 12:12:21.368932] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:11:28.311 [2024-07-26 12:12:21.369008] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:28.311 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.311 [2024-07-26 12:12:21.435017] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:28.311 [2024-07-26 12:12:21.542554] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:28.311 [2024-07-26 12:12:21.542609] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:28.311 [2024-07-26 12:12:21.542639] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:28.311 [2024-07-26 12:12:21.542650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:28.311 [2024-07-26 12:12:21.542660] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:28.311 [2024-07-26 12:12:21.542751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:28.311 [2024-07-26 12:12:21.542815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:28.311 [2024-07-26 12:12:21.542818] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # return 0 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:11:28.573 [2024-07-26 12:12:21.696456] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.573 [2024-07-26 12:12:21.727239] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:28.573 NULL1 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=2839684 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 EAL: No free 2048 kB hugepages reported on node 1 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.573 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.574 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.574 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.574 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.574 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.574 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.574 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.574 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.574 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.574 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:11:28.574 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:11:28.574 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:28.574 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:28.574 12:12:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:28.574 12:12:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.145 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.146 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:29.146 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.146 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.146 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.405 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.405 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:29.405 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.405 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.405 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.663 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.663 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:29.663 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.663 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.663 12:12:22 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:29.922 
12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:29.922 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:29.922 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:29.922 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:29.922 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.192 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.192 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:30.192 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.192 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.192 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:30.763 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:30.763 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:30.763 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:30.763 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:30.763 12:12:23 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.021 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.021 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 
00:11:31.021 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.022 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.022 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.281 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.281 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:31.281 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.282 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.282 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.542 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.542 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:31.542 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.542 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:31.542 12:12:24 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:31.802 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:31.802 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:31.802 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:31.802 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:11:31.802 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.371 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.371 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:32.371 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.371 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.371 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.631 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.631 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:32.631 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.631 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.631 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:32.891 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:32.891 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:32.891 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:32.891 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:32.891 12:12:25 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.152 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- 
# [[ 0 == 0 ]] 00:11:33.152 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:33.152 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.152 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.152 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.412 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.412 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:33.412 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.412 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.412 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:33.980 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:33.980 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:33.980 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:33.980 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:33.980 12:12:26 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.239 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.239 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:34.239 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.239 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.239 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.498 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.498 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:34.498 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.498 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.498 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:34.758 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:34.758 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:34.758 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:34.758 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:34.758 12:12:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.016 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.016 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:35.016 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.016 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.016 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@10 -- # set +x 00:11:35.581 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.581 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:35.581 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.581 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.581 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:35.839 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:35.839 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:35.839 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:35.839 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:35.839 12:12:28 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.117 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.117 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:36.117 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.117 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.117 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.381 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.381 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:36.381 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.381 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.381 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.639 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.639 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:36.639 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.639 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.639 12:12:29 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:36.899 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:36.899 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:36.899 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:36.899 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:36.899 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.467 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.467 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:37.467 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.467 12:12:30 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.467 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.724 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.724 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:37.724 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.724 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.724 12:12:30 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:37.982 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:37.982 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:37.982 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:37.982 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:37.982 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.241 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.241 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:38.241 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.241 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.241 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.500 
12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:38.500 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:38.500 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:11:38.500 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:38.500 12:12:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:38.759 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 2839684 00:11:39.020 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (2839684) - No such process 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 2839684 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:39.020 12:12:32 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:39.020 rmmod nvme_tcp 00:11:39.020 rmmod nvme_fabrics 00:11:39.020 rmmod nvme_keyring 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 2839655 ']' 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 2839655 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@950 -- # '[' -z 2839655 ']' 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # kill -0 2839655 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # uname 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2839655 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@968 -- # echo 'killing process with pid 
2839655' 00:11:39.020 killing process with pid 2839655 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@969 -- # kill 2839655 00:11:39.020 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@974 -- # wait 2839655 00:11:39.280 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:39.280 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:39.280 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:39.280 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:39.280 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:39.280 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:39.280 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:39.280 12:12:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:41.817 00:11:41.817 real 0m15.325s 00:11:41.817 user 0m38.382s 00:11:41.817 sys 0m5.953s 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:11:41.817 ************************************ 00:11:41.817 END TEST nvmf_connect_stress 00:11:41.817 ************************************ 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:41.817 ************************************ 00:11:41.817 START TEST nvmf_fused_ordering 00:11:41.817 ************************************ 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:11:41.817 * Looking for test storage... 00:11:41.817 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # 
NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:41.817 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.818 12:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:11:41.818 12:12:34 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:11:41.818 12:12:34 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # 
local -a pci_net_devs 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:43.720 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:43.721 12:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:43.721 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:43.721 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:43.721 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:43.721 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:43.721 12:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:43.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:43.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.141 ms 00:11:43.721 00:11:43.721 --- 10.0.0.2 ping statistics --- 00:11:43.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.721 rtt min/avg/max/mdev = 0.141/0.141/0.141/0.000 ms 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:43.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:43.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.139 ms 00:11:43.721 00:11:43.721 --- 10.0.0.1 ping statistics --- 00:11:43.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:43.721 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:11:43.721 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:43.722 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:43.722 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:43.722 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=2842957 00:11:43.722 12:12:36 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:11:43.722 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 2842957 00:11:43.722 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@831 -- # '[' -z 2842957 ']' 00:11:43.722 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:43.722 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:43.722 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:43.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:43.722 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:43.722 12:12:36 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:43.722 [2024-07-26 12:12:36.850508] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:11:43.722 [2024-07-26 12:12:36.850591] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:43.722 EAL: No free 2048 kB hugepages reported on node 1 00:11:43.722 [2024-07-26 12:12:36.915222] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:43.981 [2024-07-26 12:12:37.034079] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:11:43.981 [2024-07-26 12:12:37.034144] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:43.981 [2024-07-26 12:12:37.034161] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:43.981 [2024-07-26 12:12:37.034174] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:43.981 [2024-07-26 12:12:37.034186] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:43.981 [2024-07-26 12:12:37.034213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:44.548 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:44.548 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # return 0 00:11:44.548 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:44.548 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:44.548 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:44.808 [2024-07-26 12:12:37.818749] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
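The `waitforlisten 2842957` step above blocks until the freshly started nvmf_tgt process is ready, printing "Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...". The core of that wait can be sketched as a polling loop — this is an illustrative simplification, not the real autotest helper, which additionally verifies between retries that the target PID is still alive and that the app answers RPCs on the socket:

```shell
# Sketch of the waitforlisten idea: poll until a UNIX-domain socket
# appears at the given path, up to a retry budget. The real helper does
# more (PID liveness check, RPC probe); this only tests file existence.
waitforlisten_sketch() {
    sock=$1
    retries=${2:-100}
    while [ "$retries" -gt 0 ]; do
        [ -S "$sock" ] && return 0      # socket file exists: target is up
        retries=$((retries - 1))
        sleep 0.1
    done
    return 1                             # gave up: caller aborts the test
}
```

In the run above the loop returns quickly — the target comes up within about a second (12:12:36 to 12:12:37) — after which the script proceeds to the nvmf_create_transport RPC.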
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:44.808 [2024-07-26 12:12:37.834903] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.808 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:11:44.809 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.809 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:44.809 NULL1 00:11:44.809 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.809 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:11:44.809 12:12:37 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.809 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:44.809 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.809 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:11:44.809 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@561 -- # xtrace_disable 00:11:44.809 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:44.809 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:11:44.809 12:12:37 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:44.809 [2024-07-26 12:12:37.880428] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:11:44.809 [2024-07-26 12:12:37.880471] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2843029 ] 00:11:44.809 EAL: No free 2048 kB hugepages reported on node 1 00:11:45.378 Attached to nqn.2016-06.io.spdk:cnode1 00:11:45.378 Namespace ID: 1 size: 1GB 00:11:45.378 fused_ordering(0) 00:11:45.378 fused_ordering(1) 00:11:45.378 fused_ordering(2) 00:11:45.378 fused_ordering(3) 00:11:45.378 fused_ordering(4) 00:11:45.378 fused_ordering(5) 00:11:45.378 fused_ordering(6) 00:11:45.378 fused_ordering(7) 00:11:45.378 fused_ordering(8) 00:11:45.378 fused_ordering(9) 00:11:45.378 fused_ordering(10) 00:11:45.378 fused_ordering(11) 00:11:45.378 fused_ordering(12) 00:11:45.378 fused_ordering(13) 00:11:45.378 fused_ordering(14) 00:11:45.378 fused_ordering(15) 00:11:45.378 fused_ordering(16) 00:11:45.378 fused_ordering(17) 00:11:45.378 fused_ordering(18) 00:11:45.378 fused_ordering(19) 00:11:45.378 fused_ordering(20) 00:11:45.378 fused_ordering(21) 00:11:45.378 fused_ordering(22) 00:11:45.378 fused_ordering(23) 00:11:45.378 fused_ordering(24) 00:11:45.378 fused_ordering(25) 00:11:45.378 fused_ordering(26) 00:11:45.378 fused_ordering(27) 00:11:45.378 fused_ordering(28) 00:11:45.378 fused_ordering(29) 00:11:45.378 fused_ordering(30) 00:11:45.378 fused_ordering(31) 00:11:45.378 fused_ordering(32) 00:11:45.378 fused_ordering(33) 00:11:45.378 fused_ordering(34) 00:11:45.378 fused_ordering(35) 00:11:45.378 fused_ordering(36) 00:11:45.378 fused_ordering(37) 00:11:45.378 fused_ordering(38) 00:11:45.378 fused_ordering(39) 00:11:45.378 fused_ordering(40) 00:11:45.378 fused_ordering(41) 00:11:45.378 fused_ordering(42) 00:11:45.378 fused_ordering(43) 00:11:45.378 fused_ordering(44) 00:11:45.378 fused_ordering(45) 00:11:45.378 fused_ordering(46) 00:11:45.378 fused_ordering(47) 00:11:45.378 
fused_ordering(48) 00:11:45.378 fused_ordering(49) 00:11:45.378 fused_ordering(50) 00:11:45.378 fused_ordering(51) 00:11:45.378 fused_ordering(52) 00:11:45.378 fused_ordering(53) 00:11:45.378 fused_ordering(54) 00:11:45.378 fused_ordering(55) 00:11:45.378 fused_ordering(56) 00:11:45.378 fused_ordering(57) 00:11:45.378 fused_ordering(58) 00:11:45.378 fused_ordering(59) 00:11:45.378 fused_ordering(60) 00:11:45.378 fused_ordering(61) 00:11:45.378 fused_ordering(62) 00:11:45.378 fused_ordering(63) 00:11:45.378 fused_ordering(64) 00:11:45.378 fused_ordering(65) 00:11:45.378 fused_ordering(66) 00:11:45.378 fused_ordering(67) 00:11:45.378 fused_ordering(68) 00:11:45.378 fused_ordering(69) 00:11:45.378 fused_ordering(70) 00:11:45.378 fused_ordering(71) 00:11:45.378 fused_ordering(72) 00:11:45.378 fused_ordering(73) 00:11:45.378 fused_ordering(74) 00:11:45.378 fused_ordering(75) 00:11:45.378 fused_ordering(76) 00:11:45.378 fused_ordering(77) 00:11:45.378 fused_ordering(78) 00:11:45.378 fused_ordering(79) 00:11:45.378 fused_ordering(80) 00:11:45.378 fused_ordering(81) 00:11:45.378 fused_ordering(82) 00:11:45.378 fused_ordering(83) 00:11:45.378 fused_ordering(84) 00:11:45.378 fused_ordering(85) 00:11:45.378 fused_ordering(86) 00:11:45.378 fused_ordering(87) 00:11:45.378 fused_ordering(88) 00:11:45.378 fused_ordering(89) 00:11:45.378 fused_ordering(90) 00:11:45.378 fused_ordering(91) 00:11:45.378 fused_ordering(92) 00:11:45.378 fused_ordering(93) 00:11:45.378 fused_ordering(94) 00:11:45.378 fused_ordering(95) 00:11:45.378 fused_ordering(96) 00:11:45.378 fused_ordering(97) 00:11:45.378 fused_ordering(98) 00:11:45.378 fused_ordering(99) 00:11:45.378 fused_ordering(100) 00:11:45.378 fused_ordering(101) 00:11:45.378 fused_ordering(102) 00:11:45.378 fused_ordering(103) 00:11:45.378 fused_ordering(104) 00:11:45.378 fused_ordering(105) 00:11:45.378 fused_ordering(106) 00:11:45.378 fused_ordering(107) 00:11:45.378 fused_ordering(108) 00:11:45.378 fused_ordering(109) 00:11:45.378 
fused_ordering(110) 00:11:45.378 fused_ordering(111) 00:11:45.378 fused_ordering(112) 00:11:45.378 fused_ordering(113) 00:11:45.378 fused_ordering(114) 00:11:45.378 fused_ordering(115) 00:11:45.378 fused_ordering(116) 00:11:45.378 fused_ordering(117) 00:11:45.378 fused_ordering(118) 00:11:45.378 fused_ordering(119) 00:11:45.378 fused_ordering(120) 00:11:45.378 fused_ordering(121) 00:11:45.378 fused_ordering(122) 00:11:45.378 fused_ordering(123) 00:11:45.378 fused_ordering(124) 00:11:45.378 fused_ordering(125) 00:11:45.378 fused_ordering(126) 00:11:45.378 fused_ordering(127) 00:11:45.378 fused_ordering(128) 00:11:45.378 fused_ordering(129) 00:11:45.378 fused_ordering(130) 00:11:45.378 fused_ordering(131) 00:11:45.378 fused_ordering(132) 00:11:45.378 fused_ordering(133) 00:11:45.378 fused_ordering(134) 00:11:45.378 fused_ordering(135) 00:11:45.378 fused_ordering(136) 00:11:45.378 fused_ordering(137) 00:11:45.378 fused_ordering(138) 00:11:45.378 fused_ordering(139) 00:11:45.378 fused_ordering(140) 00:11:45.378 fused_ordering(141) 00:11:45.378 fused_ordering(142) 00:11:45.378 fused_ordering(143) 00:11:45.378 fused_ordering(144) 00:11:45.378 fused_ordering(145) 00:11:45.378 fused_ordering(146) 00:11:45.379 fused_ordering(147) 00:11:45.379 fused_ordering(148) 00:11:45.379 fused_ordering(149) 00:11:45.379 fused_ordering(150) 00:11:45.379 fused_ordering(151) 00:11:45.379 fused_ordering(152) 00:11:45.379 fused_ordering(153) 00:11:45.379 fused_ordering(154) 00:11:45.379 fused_ordering(155) 00:11:45.379 fused_ordering(156) 00:11:45.379 fused_ordering(157) 00:11:45.379 fused_ordering(158) 00:11:45.379 fused_ordering(159) 00:11:45.379 fused_ordering(160) 00:11:45.379 fused_ordering(161) 00:11:45.379 fused_ordering(162) 00:11:45.379 fused_ordering(163) 00:11:45.379 fused_ordering(164) 00:11:45.379 fused_ordering(165) 00:11:45.379 fused_ordering(166) 00:11:45.379 fused_ordering(167) 00:11:45.379 fused_ordering(168) 00:11:45.379 fused_ordering(169) 00:11:45.379 fused_ordering(170) 
00:11:45.379 fused_ordering(171) 00:11:45.379 fused_ordering(172) 00:11:45.379 fused_ordering(173) 00:11:45.379 fused_ordering(174) 00:11:45.379 fused_ordering(175) 00:11:45.379 fused_ordering(176) 00:11:45.379 fused_ordering(177) 00:11:45.379 fused_ordering(178) 00:11:45.379 fused_ordering(179) 00:11:45.379 fused_ordering(180) 00:11:45.379 fused_ordering(181) 00:11:45.379 fused_ordering(182) 00:11:45.379 fused_ordering(183) 00:11:45.379 fused_ordering(184) 00:11:45.379 fused_ordering(185) 00:11:45.379 fused_ordering(186) 00:11:45.379 fused_ordering(187) 00:11:45.379 fused_ordering(188) 00:11:45.379 fused_ordering(189) 00:11:45.379 fused_ordering(190) 00:11:45.379 fused_ordering(191) 00:11:45.379 fused_ordering(192) 00:11:45.379 fused_ordering(193) 00:11:45.379 fused_ordering(194) 00:11:45.379 fused_ordering(195) 00:11:45.379 fused_ordering(196) 00:11:45.379 fused_ordering(197) 00:11:45.379 fused_ordering(198) 00:11:45.379 fused_ordering(199) 00:11:45.379 fused_ordering(200) 00:11:45.379 fused_ordering(201) 00:11:45.379 fused_ordering(202) 00:11:45.379 fused_ordering(203) 00:11:45.379 fused_ordering(204) 00:11:45.379 fused_ordering(205) 00:11:45.639 fused_ordering(206) 00:11:45.639 fused_ordering(207) 00:11:45.639 fused_ordering(208) 00:11:45.639 fused_ordering(209) 00:11:45.639 fused_ordering(210) 00:11:45.639 fused_ordering(211) 00:11:45.639 fused_ordering(212) 00:11:45.639 fused_ordering(213) 00:11:45.639 fused_ordering(214) 00:11:45.639 fused_ordering(215) 00:11:45.639 fused_ordering(216) 00:11:45.639 fused_ordering(217) 00:11:45.639 fused_ordering(218) 00:11:45.639 fused_ordering(219) 00:11:45.639 fused_ordering(220) 00:11:45.639 fused_ordering(221) 00:11:45.639 fused_ordering(222) 00:11:45.639 fused_ordering(223) 00:11:45.639 fused_ordering(224) 00:11:45.639 fused_ordering(225) 00:11:45.639 fused_ordering(226) 00:11:45.639 fused_ordering(227) 00:11:45.639 fused_ordering(228) 00:11:45.639 fused_ordering(229) 00:11:45.639 fused_ordering(230) 00:11:45.639 
fused_ordering(231) … [fused_ordering entries 232–1016 elided: consecutive identical iterations logged between 00:11:45.639 and 00:11:47.721] 00:11:47.721
fused_ordering(1017) 00:11:47.721 fused_ordering(1018) 00:11:47.721 fused_ordering(1019) 00:11:47.721 fused_ordering(1020) 00:11:47.721 fused_ordering(1021) 00:11:47.721 fused_ordering(1022) 00:11:47.721 fused_ordering(1023) 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:47.721 rmmod nvme_tcp 00:11:47.721 rmmod nvme_fabrics 00:11:47.721 rmmod nvme_keyring 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # return 0 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 2842957 ']' 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 2842957 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@950 -- # '[' -z 2842957 ']' 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@954 -- # kill -0 2842957 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # uname 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2842957 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:11:47.721 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2842957' 00:11:47.722 killing process with pid 2842957 00:11:47.722 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@969 -- # kill 2842957 00:11:47.722 12:12:40 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@974 -- # wait 2842957 00:11:47.980 12:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:47.980 12:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:47.980 12:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:47.980 12:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:47.980 12:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:47.980 12:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:47.980 12:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> 
/dev/null' 00:11:47.980 12:12:41 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.518 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:50.518 00:11:50.518 real 0m8.718s 00:11:50.518 user 0m6.385s 00:11:50.518 sys 0m3.798s 00:11:50.518 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1126 -- # xtrace_disable 00:11:50.518 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:11:50.518 ************************************ 00:11:50.518 END TEST nvmf_fused_ordering 00:11:50.518 ************************************ 00:11:50.518 12:12:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:11:50.518 12:12:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:11:50.518 12:12:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:11:50.518 12:12:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:50.518 ************************************ 00:11:50.518 START TEST nvmf_ns_masking 00:11:50.518 ************************************ 00:11:50.518 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1125 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:11:50.518 * Looking for test storage... 
00:11:50.518 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:50.518 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:50.518 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:11:50.518 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:50.518 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:50.519 
12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # 
loops=5 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=36a2a8d6-4dff-4e3b-a6d0-b4cf5cc37fd1 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=795096a3-b951-4273-9630-19722961d6dd 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=42e7ad85-a8ad-4398-a6b1-e2875f7abe8f 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:50.519 12:12:43 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:11:50.519 12:12:43 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # 
x722=() 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:11:52.423 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:11:52.423 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:52.423 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:11:52.424 Found net devices under 0000:0a:00.0: cvl_0_0 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:11:52.424 Found net devices under 0000:0a:00.1: cvl_0_1 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:52.424 12:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:52.424 12:12:45 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:52.424 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:52.424 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:11:52.424 00:11:52.424 --- 10.0.0.2 ping statistics --- 00:11:52.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.424 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:52.424 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:52.424 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.096 ms 00:11:52.424 00:11:52.424 --- 10.0.0.1 ping statistics --- 00:11:52.424 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:52.424 rtt min/avg/max/mdev = 0.096/0.096/0.096/0.000 ms 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@51 -- # nvmfappstart 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@724 -- # xtrace_disable 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=2845316 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 2845316 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2845316 ']' 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:11:52.424 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:52.425 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:52.425 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:11:52.425 12:12:45 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:52.425 [2024-07-26 12:12:45.529924] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:11:52.425 [2024-07-26 12:12:45.530008] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:52.425 EAL: No free 2048 kB hugepages reported on node 1 00:11:52.425 [2024-07-26 12:12:45.601230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:52.683 [2024-07-26 12:12:45.722551] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:52.683 [2024-07-26 12:12:45.722610] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:52.683 [2024-07-26 12:12:45.722628] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:52.683 [2024-07-26 12:12:45.722642] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:52.683 [2024-07-26 12:12:45.722653] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
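The xtrace lines that follow repeatedly run the test's `ns_is_visible` helper: `nvme list-ns /dev/nvme0` prints entries like `[ 0]:0x1`, the helper greps for the namespace token, and then `nvme id-ns -o json | jq -r .nguid` is compared against an all-zero NGUID to decide whether the namespace is masked. A minimal, self-contained sketch of that check (the device output is simulated here; the real helper in `target/ns_masking.sh` queries the live `/dev/nvme0` controller, and the function/variable names below are illustrative, not taken from the script):

```shell
#!/bin/sh
# Sketch of the visibility check seen in this log.
# nvme list-ns prints one "[ idx]:0xNSID" line per attached namespace;
# a namespace is considered visible if its nsid token appears.
ns_is_visible() {
    nsid="$1"        # e.g. 0x1
    listing="$2"     # simulated `nvme list-ns /dev/nvme0` output
    echo "$listing" | grep -q "$nsid"
}

# Simulated controller output: two namespaces attached.
listing='[ 0]:0x1
[ 1]:0x2'

if ns_is_visible 0x1 "$listing"; then
    echo "0x1 visible"
fi

# Masked namespaces report an all-zero NGUID from `nvme id-ns`;
# the test compares against this sentinel (as on the "[[ ... != 0000... ]]" lines).
zero_nguid=00000000000000000000000000000000
nguid=b8753cb54b3e4f798d87799ac8c8bad8   # sample value from this log
if [ "$nguid" != "$zero_nguid" ]; then
    echo "0x1 unmasked"
fi
```

After `nvmf_subsystem_add_ns ... --no-auto-visible` (run later in this log), the same check is expected to fail for the masked namespace, which is why the test wraps it in the `NOT` helper.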
00:11:52.683 [2024-07-26 12:12:45.722684] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:53.653 12:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:11:53.653 12:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:11:53.653 12:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:53.653 12:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@730 -- # xtrace_disable 00:11:53.653 12:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:11:53.653 12:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:53.653 12:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:11:53.653 [2024-07-26 12:12:46.767838] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:53.653 12:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:11:53.653 12:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:11:53.653 12:12:46 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:11:53.910 Malloc1 00:11:53.910 12:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:11:54.168 Malloc2 00:11:54.168 12:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:54.426 12:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:11:54.683 12:12:47 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:54.941 [2024-07-26 12:12:48.040375] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:54.941 12:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:11:54.941 12:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 42e7ad85-a8ad-4398-a6b1-e2875f7abe8f -a 10.0.0.2 -s 4420 -i 4 00:11:54.941 12:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:11:54.941 12:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:54.941 12:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:54.941 12:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:54.941 12:12:48 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # 
grep -c SPDKISFASTANDAWESOME 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:57.480 [ 0]:0x1 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b8753cb54b3e4f798d87799ac8c8bad8 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b8753cb54b3e4f798d87799ac8c8bad8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:11:57.480 [ 0]:0x1 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b8753cb54b3e4f798d87799ac8c8bad8 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b8753cb54b3e4f798d87799ac8c8bad8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:11:57.480 [ 1]:0x2 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4ccf5fff5774e12ba995e3977f780f9 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4ccf5fff5774e12ba995e3977f780f9 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:57.480 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.480 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:11:57.738 12:12:50 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:11:58.308 12:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1 00:11:58.308 12:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 42e7ad85-a8ad-4398-a6b1-e2875f7abe8f -a 10.0.0.2 -s 4420 -i 4 00:11:58.308 12:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:11:58.308 12:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:11:58.308 12:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:58.308 12:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:11:58.308 12:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:11:58.308 12:12:51 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:00.851 12:12:53 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
type -t ns_is_visible 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:00.851 [ 0]:0x2 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 
-- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4ccf5fff5774e12ba995e3977f780f9 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4ccf5fff5774e12ba995e3977f780f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:00.851 [ 0]:0x1 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b8753cb54b3e4f798d87799ac8c8bad8 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b8753cb54b3e4f798d87799ac8c8bad8 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@43 -- # grep 0x2 00:12:00.851 [ 1]:0x2 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4ccf5fff5774e12ba995e3977f780f9 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4ccf5fff5774e12ba995e3977f780f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:00.851 12:12:53 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns 
/dev/nvme0 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:01.109 [ 0]:0x2 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4ccf5fff5774e12ba995e3977f780f9 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- 
# [[ d4ccf5fff5774e12ba995e3977f780f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:01.109 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:01.109 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:01.368 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:12:01.368 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 42e7ad85-a8ad-4398-a6b1-e2875f7abe8f -a 10.0.0.2 -s 4420 -i 4 00:12:01.628 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:01.628 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:12:01.628 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:01.628 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:01.628 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:01.628 12:12:54 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l 
-o NAME,SERIAL 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:04.168 [ 0]:0x1 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b8753cb54b3e4f798d87799ac8c8bad8 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b8753cb54b3e4f798d87799ac8c8bad8 != 
\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:04.168 [ 1]:0x2 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4ccf5fff5774e12ba995e3977f780f9 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4ccf5fff5774e12ba995e3977f780f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.168 12:12:56 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg ns_is_visible 0x1 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t 
ns_is_visible 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:04.168 [ 0]:0x2 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # 
nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.168 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4ccf5fff5774e12ba995e3977f780f9 00:12:04.169 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4ccf5fff5774e12ba995e3977f780f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.169 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:04.169 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:04.169 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:04.169 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:04.169 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.169 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:04.169 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.169 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:04.169 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # 
case "$(type -t "$arg")" in 00:12:04.169 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:04.169 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:12:04.169 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:12:04.428 [2024-07-26 12:12:57.641265] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:12:04.428 request: 00:12:04.428 { 00:12:04.428 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:04.428 "nsid": 2, 00:12:04.428 "host": "nqn.2016-06.io.spdk:host1", 00:12:04.428 "method": "nvmf_ns_remove_host", 00:12:04.428 "req_id": 1 00:12:04.428 } 00:12:04.428 Got JSON-RPC error response 00:12:04.428 response: 00:12:04.428 { 00:12:04.428 "code": -32602, 00:12:04.428 "message": "Invalid parameters" 00:12:04.428 } 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # local es=0 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # 
valid_exec_arg ns_is_visible 0x1 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@638 -- # local arg=ns_is_visible 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -t ns_is_visible 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # ns_is_visible 0x1 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:12:04.428 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@653 -- # es=1 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:12:04.687 12:12:57 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:12:04.687 [ 0]:0x2 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=d4ccf5fff5774e12ba995e3977f780f9 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ d4ccf5fff5774e12ba995e3977f780f9 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:04.687 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=2846949 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 2846949 /var/tmp/host.sock 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@831 -- # '[' -z 2846949 ']' 00:12:04.687 
12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:12:04.687 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:04.687 12:12:57 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:04.946 [2024-07-26 12:12:57.977611] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:12:04.946 [2024-07-26 12:12:57.977701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2846949 ] 00:12:04.946 EAL: No free 2048 kB hugepages reported on node 1 00:12:04.946 [2024-07-26 12:12:58.044899] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:04.946 [2024-07-26 12:12:58.164395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:05.880 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:05.880 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # return 0 00:12:05.880 12:12:58 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:06.139 12:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:12:06.397 12:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 36a2a8d6-4dff-4e3b-a6d0-b4cf5cc37fd1 00:12:06.397 12:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:06.397 12:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 36A2A8D64DFF4E3BA6D0B4CF5CC37FD1 -i 00:12:06.654 12:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 795096a3-b951-4273-9630-19722961d6dd 00:12:06.654 12:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:12:06.654 12:12:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 795096A3B9514273963019722961D6DD -i 00:12:06.912 12:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:12:07.169 12:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:12:07.427 12:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:07.427 12:13:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:12:07.994 nvme0n1 00:12:07.994 12:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:07.994 12:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:12:08.561 nvme1n2 00:12:08.561 12:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:12:08.561 12:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:12:08.561 12:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:12:08.561 12:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:12:08.561 12:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:12:08.819 12:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:12:08.819 12:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:12:08.819 12:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:12:08.819 12:13:01 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:12:08.819 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 36a2a8d6-4dff-4e3b-a6d0-b4cf5cc37fd1 == \3\6\a\2\a\8\d\6\-\4\d\f\f\-\4\e\3\b\-\a\6\d\0\-\b\4\c\f\5\c\c\3\7\f\d\1 ]] 00:12:08.819 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:12:08.819 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:12:08.819 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:12:09.077 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 795096a3-b951-4273-9630-19722961d6dd == \7\9\5\0\9\6\a\3\-\b\9\5\1\-\4\2\7\3\-\9\6\3\0\-\1\9\7\2\2\9\6\1\d\6\d\d ]] 00:12:09.077 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 2846949 00:12:09.077 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2846949 ']' 00:12:09.077 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2846949 00:12:09.077 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:12:09.077 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:09.077 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2846949 00:12:09.336 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:12:09.336 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:12:09.336 
12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2846949' 00:12:09.336 killing process with pid 2846949 00:12:09.336 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2846949 00:12:09.336 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2846949 00:12:09.594 12:13:02 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.852 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:12:09.852 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:12:09.852 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:09.852 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:12:09.852 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.852 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:12:09.852 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.852 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.852 rmmod nvme_tcp 00:12:09.852 rmmod nvme_fabrics 00:12:10.112 rmmod nvme_keyring 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' 
-n 2845316 ']' 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 2845316 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@950 -- # '[' -z 2845316 ']' 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # kill -0 2845316 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # uname 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2845316 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2845316' 00:12:10.112 killing process with pid 2845316 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@969 -- # kill 2845316 00:12:10.112 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@974 -- # wait 2845316 00:12:10.370 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:10.371 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:10.371 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:10.371 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:10.371 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:12:10.371 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:10.371 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:10.371 12:13:03 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.309 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:12.309 00:12:12.309 real 0m22.264s 00:12:12.309 user 0m29.834s 00:12:12.309 sys 0m4.199s 00:12:12.309 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:12.309 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:12:12.309 ************************************ 00:12:12.309 END TEST nvmf_ns_masking 00:12:12.309 ************************************ 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.568 ************************************ 00:12:12.568 START TEST nvmf_nvme_cli 00:12:12.568 ************************************ 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:12:12.568 * Looking for test storage... 
00:12:12.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.568 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.569 12:13:05 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@16 -- # nvmftestinit 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:12:12.569 12:13:05 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:14.471 
12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:14.471 12:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:12:14.471 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:12:14.471 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:12:14.471 Found net devices under 0000:0a:00.0: cvl_0_0 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:14.471 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:12:14.471 Found net devices under 0000:0a:00.1: cvl_0_1 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:14.472 12:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:14.472 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:14.730 12:13:07 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:14.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:14.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.262 ms 00:12:14.730 00:12:14.730 --- 10.0.0.2 ping statistics --- 00:12:14.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.730 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:14.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:14.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:12:14.730 00:12:14.730 --- 10.0.0.1 ping statistics --- 00:12:14.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:14.730 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@724 -- # xtrace_disable 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=2849572 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 2849572 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@831 -- # '[' -z 2849572 ']' 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # local max_retries=100 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:14.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:14.730 12:13:07 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:14.730 [2024-07-26 12:13:07.898620] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:12:14.730 [2024-07-26 12:13:07.898710] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:14.730 EAL: No free 2048 kB hugepages reported on node 1 00:12:14.730 [2024-07-26 12:13:07.975592] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:14.990 [2024-07-26 12:13:08.099524] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:14.990 [2024-07-26 12:13:08.099584] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:12:14.990 [2024-07-26 12:13:08.099601] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:14.990 [2024-07-26 12:13:08.099615] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:14.990 [2024-07-26 12:13:08.099627] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:14.990 [2024-07-26 12:13:08.099696] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:14.990 [2024-07-26 12:13:08.099753] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:14.990 [2024-07-26 12:13:08.099807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:14.990 [2024-07-26 12:13:08.099810] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # return 0 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@730 -- # xtrace_disable 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.928 [2024-07-26 12:13:08.879582] tcp.c: 677:nvmf_tcp_create: 
*NOTICE*: *** TCP Transport Init *** 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.928 Malloc0 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.928 Malloc1 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.928 12:13:08 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.928 [2024-07-26 12:13:08.965682] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:15.928 12:13:08 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -a 10.0.0.2 -s 4420 00:12:15.928 00:12:15.928 Discovery Log Number of Records 2, Generation counter 2 00:12:15.928 =====Discovery Log Entry 0====== 00:12:15.928 trtype: tcp 00:12:15.928 adrfam: ipv4 00:12:15.928 subtype: current discovery subsystem 00:12:15.928 treq: not required 00:12:15.928 portid: 0 00:12:15.928 trsvcid: 4420 00:12:15.928 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:15.928 traddr: 10.0.0.2 00:12:15.928 eflags: explicit discovery connections, duplicate discovery information 00:12:15.928 sectype: none 00:12:15.928 =====Discovery Log Entry 1====== 00:12:15.928 trtype: tcp 00:12:15.928 adrfam: ipv4 00:12:15.928 subtype: nvme subsystem 00:12:15.928 treq: not required 00:12:15.928 portid: 0 00:12:15.928 trsvcid: 4420 00:12:15.928 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:15.928 traddr: 10.0.0.2 00:12:15.928 eflags: none 00:12:15.928 sectype: none 00:12:15.928 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:12:15.928 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:12:15.929 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:15.929 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.929 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:15.929 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:15.929 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:15.929 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:15.929 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 
00:12:15.929 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:12:15.929 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:16.495 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:12:16.495 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:12:16.495 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:12:16.495 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:12:16.495 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:12:16.495 12:13:09 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:12:19.033 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:12:19.033 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:12:19.033 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.033 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:12:19.033 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.033 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:12:19.033 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 
00:12:19.033 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:19.033 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.033 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:19.033 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:19.033 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.033 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:12:19.034 /dev/nvme0n1 ]] 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev 
_ 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.034 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@561 -- # xtrace_disable 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.034 rmmod nvme_tcp 00:12:19.034 rmmod nvme_fabrics 00:12:19.034 rmmod 
nvme_keyring 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 2849572 ']' 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 2849572 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@950 -- # '[' -z 2849572 ']' 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # kill -0 2849572 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # uname 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2849572 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2849572' 00:12:19.034 killing process with pid 2849572 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@969 -- # kill 2849572 00:12:19.034 12:13:11 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@974 -- # wait 2849572 00:12:19.034 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.034 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.034 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.034 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.034 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.034 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.034 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.034 12:13:12 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:21.572 00:12:21.572 real 0m8.727s 00:12:21.572 user 0m17.169s 00:12:21.572 sys 0m2.269s 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1126 -- # xtrace_disable 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:12:21.572 ************************************ 00:12:21.572 END TEST nvmf_nvme_cli 00:12:21.572 ************************************ 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:21.572 
************************************ 00:12:21.572 START TEST nvmf_vfio_user 00:12:21.572 ************************************ 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:12:21.572 * Looking for test storage... 00:12:21.572 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.572 12:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:12:21.572 12:13:14 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2850493 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2850493' 00:12:21.572 Process pid: 2850493 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2850493 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2850493 ']' 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:21.572 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:12:21.573 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:21.573 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:21.573 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:12:21.573 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:12:21.573 [2024-07-26 12:13:14.488098] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:12:21.573 [2024-07-26 12:13:14.488199] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:21.573 EAL: No free 2048 kB hugepages reported on node 1 00:12:21.573 [2024-07-26 12:13:14.546188] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:21.573 [2024-07-26 12:13:14.654172] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:21.573 [2024-07-26 12:13:14.654228] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:21.573 [2024-07-26 12:13:14.654257] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:21.573 [2024-07-26 12:13:14.654277] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:21.573 [2024-07-26 12:13:14.654288] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:21.573 [2024-07-26 12:13:14.654340] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:21.573 [2024-07-26 12:13:14.654385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:21.573 [2024-07-26 12:13:14.654441] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:21.573 [2024-07-26 12:13:14.654443] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:21.573 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:12:21.573 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:12:21.573 12:13:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:12:22.951 12:13:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:12:22.951 12:13:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:12:22.951 12:13:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:12:22.951 12:13:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:22.951 12:13:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:12:22.951 12:13:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:12:23.208 Malloc1 00:12:23.208 12:13:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:12:23.466 12:13:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:12:23.724 12:13:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:12:23.982 12:13:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:23.982 12:13:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:12:23.982 12:13:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:12:24.239 Malloc2 00:12:24.239 12:13:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:12:24.806 12:13:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:12:24.806 12:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:12:25.065 12:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:12:25.065 12:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:12:25.065 12:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in 
$(seq 1 $NUM_DEVICES) 00:12:25.065 12:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:25.065 12:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:12:25.065 12:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:25.065 [2024-07-26 12:13:18.292559] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:12:25.065 [2024-07-26 12:13:18.292605] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2850920 ] 00:12:25.065 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.338 [2024-07-26 12:13:18.327772] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:12:25.338 [2024-07-26 12:13:18.336556] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:25.338 [2024-07-26 12:13:18.336584] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7fd4c2b65000 00:12:25.338 [2024-07-26 12:13:18.337551] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.338 [2024-07-26 12:13:18.338550] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.338 [2024-07-26 
12:13:18.339553] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.338 [2024-07-26 12:13:18.340560] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:25.338 [2024-07-26 12:13:18.341566] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:25.338 [2024-07-26 12:13:18.342571] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.338 [2024-07-26 12:13:18.343578] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:25.338 [2024-07-26 12:13:18.344589] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:25.338 [2024-07-26 12:13:18.345600] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:25.338 [2024-07-26 12:13:18.345620] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7fd4c2b5a000 00:12:25.338 [2024-07-26 12:13:18.346968] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:25.338 [2024-07-26 12:13:18.367354] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:12:25.338 [2024-07-26 12:13:18.367409] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:12:25.338 [2024-07-26 12:13:18.369720] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 
0x0, value 0x201e0100ff 00:12:25.338 [2024-07-26 12:13:18.369778] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:25.338 [2024-07-26 12:13:18.369876] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:12:25.338 [2024-07-26 12:13:18.369910] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:12:25.338 [2024-07-26 12:13:18.369922] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:12:25.338 [2024-07-26 12:13:18.370708] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:12:25.338 [2024-07-26 12:13:18.370734] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:12:25.338 [2024-07-26 12:13:18.370748] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:12:25.338 [2024-07-26 12:13:18.371711] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:12:25.338 [2024-07-26 12:13:18.371731] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:12:25.338 [2024-07-26 12:13:18.371745] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:12:25.338 [2024-07-26 12:13:18.372714] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:12:25.338 [2024-07-26 12:13:18.372734] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:25.338 [2024-07-26 12:13:18.373720] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:12:25.338 [2024-07-26 12:13:18.373739] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:12:25.338 [2024-07-26 12:13:18.373748] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:12:25.338 [2024-07-26 12:13:18.373759] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:25.338 [2024-07-26 12:13:18.373870] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:12:25.338 [2024-07-26 12:13:18.373878] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:25.338 [2024-07-26 12:13:18.373887] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:12:25.338 [2024-07-26 12:13:18.378070] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:12:25.338 [2024-07-26 12:13:18.378742] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:12:25.338 [2024-07-26 12:13:18.379750] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:25.338 
[2024-07-26 12:13:18.380737] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:25.338 [2024-07-26 12:13:18.380846] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:25.338 [2024-07-26 12:13:18.381758] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:12:25.338 [2024-07-26 12:13:18.381776] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:25.338 [2024-07-26 12:13:18.381785] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.381809] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:12:25.338 [2024-07-26 12:13:18.381826] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.381858] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:25.338 [2024-07-26 12:13:18.381868] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.338 [2024-07-26 12:13:18.381875] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.338 [2024-07-26 12:13:18.381898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.338 [2024-07-26 12:13:18.381955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 
cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:25.338 [2024-07-26 12:13:18.381975] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:12:25.338 [2024-07-26 12:13:18.381983] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:12:25.338 [2024-07-26 12:13:18.381991] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:12:25.338 [2024-07-26 12:13:18.381999] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:25.338 [2024-07-26 12:13:18.382007] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:12:25.338 [2024-07-26 12:13:18.382014] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:12:25.338 [2024-07-26 12:13:18.382022] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.382035] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.382077] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:25.338 [2024-07-26 12:13:18.382096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:25.338 [2024-07-26 12:13:18.382120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.338 [2024-07-26 12:13:18.382134] nvme_qpair.c: 223:nvme_admin_qpair_print_command: 
*NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.338 [2024-07-26 12:13:18.382147] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.338 [2024-07-26 12:13:18.382159] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:25.338 [2024-07-26 12:13:18.382167] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.382186] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.382201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:25.338 [2024-07-26 12:13:18.382214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:25.338 [2024-07-26 12:13:18.382226] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:12:25.338 [2024-07-26 12:13:18.382235] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.382254] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.382267] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.382280] 
nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:25.338 [2024-07-26 12:13:18.382292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:25.338 [2024-07-26 12:13:18.382377] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.382395] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.382410] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:25.338 [2024-07-26 12:13:18.382433] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:25.338 [2024-07-26 12:13:18.382439] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.338 [2024-07-26 12:13:18.382448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:25.338 [2024-07-26 12:13:18.382465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:25.338 [2024-07-26 12:13:18.382485] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:12:25.338 [2024-07-26 12:13:18.382614] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.382633] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:12:25.338 [2024-07-26 
12:13:18.382646] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:25.338 [2024-07-26 12:13:18.382654] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.338 [2024-07-26 12:13:18.382660] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.338 [2024-07-26 12:13:18.382669] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.338 [2024-07-26 12:13:18.382698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:12:25.338 [2024-07-26 12:13:18.382722] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.382737] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:25.338 [2024-07-26 12:13:18.382749] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:25.338 [2024-07-26 12:13:18.382757] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.338 [2024-07-26 12:13:18.382763] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.338 [2024-07-26 12:13:18.382772] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.338 [2024-07-26 12:13:18.382786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:25.339 [2024-07-26 12:13:18.382804] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:25.339 [2024-07-26 12:13:18.382816] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:12:25.339 [2024-07-26 12:13:18.382830] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:12:25.339 [2024-07-26 12:13:18.382844] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:12:25.339 [2024-07-26 12:13:18.382853] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:25.339 [2024-07-26 12:13:18.382862] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:12:25.339 [2024-07-26 12:13:18.382871] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:12:25.339 [2024-07-26 12:13:18.382879] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:12:25.339 [2024-07-26 12:13:18.382887] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:12:25.339 [2024-07-26 12:13:18.382915] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:25.339 [2024-07-26 12:13:18.382934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 
00:12:25.339 [2024-07-26 12:13:18.382953] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:25.339 [2024-07-26 12:13:18.382965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:25.339 [2024-07-26 12:13:18.382981] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:25.339 [2024-07-26 12:13:18.382993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:25.339 [2024-07-26 12:13:18.383009] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:25.339 [2024-07-26 12:13:18.383020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:25.339 [2024-07-26 12:13:18.383069] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:25.339 [2024-07-26 12:13:18.383082] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:25.339 [2024-07-26 12:13:18.383089] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:25.339 [2024-07-26 12:13:18.383095] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:25.339 [2024-07-26 12:13:18.383102] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:25.339 [2024-07-26 12:13:18.383112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:25.339 [2024-07-26 12:13:18.383124] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 
virt_addr:0x2000002fc000 len:512 00:12:25.339 [2024-07-26 12:13:18.383133] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:25.339 [2024-07-26 12:13:18.383139] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.339 [2024-07-26 12:13:18.383148] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:25.339 [2024-07-26 12:13:18.383164] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:25.339 [2024-07-26 12:13:18.383173] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:25.339 [2024-07-26 12:13:18.383179] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.339 [2024-07-26 12:13:18.383189] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:25.339 [2024-07-26 12:13:18.383201] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:25.339 [2024-07-26 12:13:18.383210] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:25.339 [2024-07-26 12:13:18.383216] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:25.339 [2024-07-26 12:13:18.383225] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:25.339 [2024-07-26 12:13:18.383237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:25.339 [2024-07-26 12:13:18.383258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:25.339 [2024-07-26 12:13:18.383279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:25.339 [2024-07-26 12:13:18.383293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:25.339 ===================================================== 00:12:25.339 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:25.339 ===================================================== 00:12:25.339 Controller Capabilities/Features 00:12:25.339 ================================ 00:12:25.339 Vendor ID: 4e58 00:12:25.339 Subsystem Vendor ID: 4e58 00:12:25.339 Serial Number: SPDK1 00:12:25.339 Model Number: SPDK bdev Controller 00:12:25.339 Firmware Version: 24.09 00:12:25.339 Recommended Arb Burst: 6 00:12:25.339 IEEE OUI Identifier: 8d 6b 50 00:12:25.339 Multi-path I/O 00:12:25.339 May have multiple subsystem ports: Yes 00:12:25.339 May have multiple controllers: Yes 00:12:25.339 Associated with SR-IOV VF: No 00:12:25.339 Max Data Transfer Size: 131072 00:12:25.339 Max Number of Namespaces: 32 00:12:25.339 Max Number of I/O Queues: 127 00:12:25.339 NVMe Specification Version (VS): 1.3 00:12:25.339 NVMe Specification Version (Identify): 1.3 00:12:25.339 Maximum Queue Entries: 256 00:12:25.339 Contiguous Queues Required: Yes 00:12:25.339 Arbitration Mechanisms Supported 00:12:25.339 Weighted Round Robin: Not Supported 00:12:25.339 Vendor Specific: Not Supported 00:12:25.339 Reset Timeout: 15000 ms 00:12:25.339 Doorbell Stride: 4 bytes 00:12:25.339 NVM Subsystem Reset: Not Supported 00:12:25.339 Command Sets Supported 00:12:25.339 NVM Command Set: Supported 00:12:25.339 Boot Partition: Not Supported 00:12:25.339 Memory Page Size Minimum: 4096 bytes 00:12:25.339 Memory Page Size Maximum: 4096 bytes 00:12:25.339 Persistent Memory Region: Not 
Supported 00:12:25.339 Optional Asynchronous Events Supported 00:12:25.339 Namespace Attribute Notices: Supported 00:12:25.339 Firmware Activation Notices: Not Supported 00:12:25.339 ANA Change Notices: Not Supported 00:12:25.339 PLE Aggregate Log Change Notices: Not Supported 00:12:25.339 LBA Status Info Alert Notices: Not Supported 00:12:25.339 EGE Aggregate Log Change Notices: Not Supported 00:12:25.339 Normal NVM Subsystem Shutdown event: Not Supported 00:12:25.339 Zone Descriptor Change Notices: Not Supported 00:12:25.339 Discovery Log Change Notices: Not Supported 00:12:25.339 Controller Attributes 00:12:25.339 128-bit Host Identifier: Supported 00:12:25.339 Non-Operational Permissive Mode: Not Supported 00:12:25.339 NVM Sets: Not Supported 00:12:25.339 Read Recovery Levels: Not Supported 00:12:25.339 Endurance Groups: Not Supported 00:12:25.339 Predictable Latency Mode: Not Supported 00:12:25.339 Traffic Based Keep ALive: Not Supported 00:12:25.339 Namespace Granularity: Not Supported 00:12:25.339 SQ Associations: Not Supported 00:12:25.339 UUID List: Not Supported 00:12:25.339 Multi-Domain Subsystem: Not Supported 00:12:25.339 Fixed Capacity Management: Not Supported 00:12:25.339 Variable Capacity Management: Not Supported 00:12:25.339 Delete Endurance Group: Not Supported 00:12:25.339 Delete NVM Set: Not Supported 00:12:25.339 Extended LBA Formats Supported: Not Supported 00:12:25.339 Flexible Data Placement Supported: Not Supported 00:12:25.339 00:12:25.339 Controller Memory Buffer Support 00:12:25.339 ================================ 00:12:25.339 Supported: No 00:12:25.339 00:12:25.339 Persistent Memory Region Support 00:12:25.339 ================================ 00:12:25.339 Supported: No 00:12:25.339 00:12:25.339 Admin Command Set Attributes 00:12:25.339 ============================ 00:12:25.339 Security Send/Receive: Not Supported 00:12:25.339 Format NVM: Not Supported 00:12:25.339 Firmware Activate/Download: Not Supported 00:12:25.339 Namespace 
Management: Not Supported 00:12:25.339 Device Self-Test: Not Supported 00:12:25.339 Directives: Not Supported 00:12:25.339 NVMe-MI: Not Supported 00:12:25.339 Virtualization Management: Not Supported 00:12:25.339 Doorbell Buffer Config: Not Supported 00:12:25.339 Get LBA Status Capability: Not Supported 00:12:25.339 Command & Feature Lockdown Capability: Not Supported 00:12:25.339 Abort Command Limit: 4 00:12:25.339 Async Event Request Limit: 4 00:12:25.339 Number of Firmware Slots: N/A 00:12:25.339 Firmware Slot 1 Read-Only: N/A 00:12:25.339 Firmware Activation Without Reset: N/A 00:12:25.339 Multiple Update Detection Support: N/A 00:12:25.339 Firmware Update Granularity: No Information Provided 00:12:25.339 Per-Namespace SMART Log: No 00:12:25.339 Asymmetric Namespace Access Log Page: Not Supported 00:12:25.339 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:12:25.339 Command Effects Log Page: Supported 00:12:25.339 Get Log Page Extended Data: Supported 00:12:25.339 Telemetry Log Pages: Not Supported 00:12:25.339 Persistent Event Log Pages: Not Supported 00:12:25.339 Supported Log Pages Log Page: May Support 00:12:25.339 Commands Supported & Effects Log Page: Not Supported 00:12:25.339 Feature Identifiers & Effects Log Page:May Support 00:12:25.339 NVMe-MI Commands & Effects Log Page: May Support 00:12:25.339 Data Area 4 for Telemetry Log: Not Supported 00:12:25.339 Error Log Page Entries Supported: 128 00:12:25.339 Keep Alive: Supported 00:12:25.339 Keep Alive Granularity: 10000 ms 00:12:25.339 00:12:25.339 NVM Command Set Attributes 00:12:25.339 ========================== 00:12:25.339 Submission Queue Entry Size 00:12:25.339 Max: 64 00:12:25.339 Min: 64 00:12:25.339 Completion Queue Entry Size 00:12:25.339 Max: 16 00:12:25.339 Min: 16 00:12:25.339 Number of Namespaces: 32 00:12:25.339 Compare Command: Supported 00:12:25.339 Write Uncorrectable Command: Not Supported 00:12:25.339 Dataset Management Command: Supported 00:12:25.339 Write Zeroes Command: Supported 
00:12:25.339 Set Features Save Field: Not Supported 00:12:25.339 Reservations: Not Supported 00:12:25.339 Timestamp: Not Supported 00:12:25.339 Copy: Supported 00:12:25.339 Volatile Write Cache: Present 00:12:25.339 Atomic Write Unit (Normal): 1 00:12:25.339 Atomic Write Unit (PFail): 1 00:12:25.339 Atomic Compare & Write Unit: 1 00:12:25.339 Fused Compare & Write: Supported 00:12:25.339 Scatter-Gather List 00:12:25.339 SGL Command Set: Supported (Dword aligned) 00:12:25.339 SGL Keyed: Not Supported 00:12:25.339 SGL Bit Bucket Descriptor: Not Supported 00:12:25.339 SGL Metadata Pointer: Not Supported 00:12:25.340 Oversized SGL: Not Supported 00:12:25.340 SGL Metadata Address: Not Supported 00:12:25.340 SGL Offset: Not Supported 00:12:25.340 Transport SGL Data Block: Not Supported 00:12:25.340 Replay Protected Memory Block: Not Supported 00:12:25.340 00:12:25.340 Firmware Slot Information 00:12:25.340 ========================= 00:12:25.340 Active slot: 1 00:12:25.340 Slot 1 Firmware Revision: 24.09 00:12:25.340 00:12:25.340 00:12:25.340 Commands Supported and Effects 00:12:25.340 ============================== 00:12:25.340 Admin Commands 00:12:25.340 -------------- 00:12:25.340 Get Log Page (02h): Supported 00:12:25.340 Identify (06h): Supported 00:12:25.340 Abort (08h): Supported 00:12:25.340 Set Features (09h): Supported 00:12:25.340 Get Features (0Ah): Supported 00:12:25.340 Asynchronous Event Request (0Ch): Supported 00:12:25.340 Keep Alive (18h): Supported 00:12:25.340 I/O Commands 00:12:25.340 ------------ 00:12:25.340 Flush (00h): Supported LBA-Change 00:12:25.340 Write (01h): Supported LBA-Change 00:12:25.340 Read (02h): Supported 00:12:25.340 Compare (05h): Supported 00:12:25.340 Write Zeroes (08h): Supported LBA-Change 00:12:25.340 Dataset Management (09h): Supported LBA-Change 00:12:25.340 Copy (19h): Supported LBA-Change 00:12:25.340 00:12:25.340 Error Log 00:12:25.340 ========= 00:12:25.340 00:12:25.340 Arbitration 00:12:25.340 =========== 00:12:25.340 
Arbitration Burst: 1 00:12:25.340 00:12:25.340 Power Management 00:12:25.340 ================ 00:12:25.340 Number of Power States: 1 00:12:25.340 Current Power State: Power State #0 00:12:25.340 Power State #0: 00:12:25.340 Max Power: 0.00 W 00:12:25.340 Non-Operational State: Operational 00:12:25.340 Entry Latency: Not Reported 00:12:25.340 Exit Latency: Not Reported 00:12:25.340 Relative Read Throughput: 0 00:12:25.340 Relative Read Latency: 0 00:12:25.340 Relative Write Throughput: 0 00:12:25.340 Relative Write Latency: 0 00:12:25.340 Idle Power: Not Reported 00:12:25.340 Active Power: Not Reported 00:12:25.340 Non-Operational Permissive Mode: Not Supported 00:12:25.340 00:12:25.340 Health Information 00:12:25.340 ================== 00:12:25.340 Critical Warnings: 00:12:25.340 Available Spare Space: OK 00:12:25.340 Temperature: OK 00:12:25.340 Device Reliability: OK 00:12:25.340 Read Only: No 00:12:25.340 Volatile Memory Backup: OK 00:12:25.340 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:25.340 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:25.340 Available Spare: 0% 00:12:25.340 Available Sp[2024-07-26 12:13:18.383440] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:25.340 [2024-07-26 12:13:18.383457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:25.340 [2024-07-26 12:13:18.383500] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:12:25.340 [2024-07-26 12:13:18.383518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.340 [2024-07-26 12:13:18.383530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.340 [2024-07-26 12:13:18.383539] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.340 [2024-07-26 12:13:18.383549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:25.340 [2024-07-26 12:13:18.383770] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:12:25.340 [2024-07-26 12:13:18.383792] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:12:25.340 [2024-07-26 12:13:18.384771] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:25.340 [2024-07-26 12:13:18.384858] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:12:25.340 [2024-07-26 12:13:18.384874] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:12:25.340 [2024-07-26 12:13:18.385776] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:12:25.340 [2024-07-26 12:13:18.385800] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:12:25.340 [2024-07-26 12:13:18.385859] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:12:25.340 [2024-07-26 12:13:18.388071] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:25.340 are Threshold: 0% 00:12:25.340 Life Percentage Used: 0% 00:12:25.340 Data Units Read: 0 00:12:25.340 Data Units Written: 0 00:12:25.340 Host Read Commands: 0 00:12:25.340 Host Write Commands: 
0 00:12:25.340 Controller Busy Time: 0 minutes 00:12:25.340 Power Cycles: 0 00:12:25.340 Power On Hours: 0 hours 00:12:25.340 Unsafe Shutdowns: 0 00:12:25.340 Unrecoverable Media Errors: 0 00:12:25.340 Lifetime Error Log Entries: 0 00:12:25.340 Warning Temperature Time: 0 minutes 00:12:25.340 Critical Temperature Time: 0 minutes 00:12:25.340 00:12:25.340 Number of Queues 00:12:25.340 ================ 00:12:25.340 Number of I/O Submission Queues: 127 00:12:25.340 Number of I/O Completion Queues: 127 00:12:25.340 00:12:25.340 Active Namespaces 00:12:25.340 ================= 00:12:25.340 Namespace ID:1 00:12:25.340 Error Recovery Timeout: Unlimited 00:12:25.340 Command Set Identifier: NVM (00h) 00:12:25.340 Deallocate: Supported 00:12:25.340 Deallocated/Unwritten Error: Not Supported 00:12:25.340 Deallocated Read Value: Unknown 00:12:25.340 Deallocate in Write Zeroes: Not Supported 00:12:25.340 Deallocated Guard Field: 0xFFFF 00:12:25.340 Flush: Supported 00:12:25.340 Reservation: Supported 00:12:25.340 Namespace Sharing Capabilities: Multiple Controllers 00:12:25.340 Size (in LBAs): 131072 (0GiB) 00:12:25.340 Capacity (in LBAs): 131072 (0GiB) 00:12:25.340 Utilization (in LBAs): 131072 (0GiB) 00:12:25.340 NGUID: 23F04DC1B44E4EFF94A553C9C449C6B5 00:12:25.340 UUID: 23f04dc1-b44e-4eff-94a5-53c9c449c6b5 00:12:25.340 Thin Provisioning: Not Supported 00:12:25.340 Per-NS Atomic Units: Yes 00:12:25.340 Atomic Boundary Size (Normal): 0 00:12:25.340 Atomic Boundary Size (PFail): 0 00:12:25.340 Atomic Boundary Offset: 0 00:12:25.340 Maximum Single Source Range Length: 65535 00:12:25.340 Maximum Copy Length: 65535 00:12:25.340 Maximum Source Range Count: 1 00:12:25.340 NGUID/EUI64 Never Reused: No 00:12:25.340 Namespace Write Protected: No 00:12:25.340 Number of LBA Formats: 1 00:12:25.340 Current LBA Format: LBA Format #00 00:12:25.340 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:25.340 00:12:25.340 12:13:18 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:25.340 EAL: No free 2048 kB hugepages reported on node 1 00:12:25.601 [2024-07-26 12:13:18.615911] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:30.883 Initializing NVMe Controllers 00:12:30.883 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:30.883 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:30.883 Initialization complete. Launching workers. 00:12:30.883 ======================================================== 00:12:30.883 Latency(us) 00:12:30.883 Device Information : IOPS MiB/s Average min max 00:12:30.883 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 33284.19 130.02 3847.25 1186.57 7650.85 00:12:30.883 ======================================================== 00:12:30.883 Total : 33284.19 130.02 3847.25 1186.57 7650.85 00:12:30.883 00:12:30.883 [2024-07-26 12:13:23.638970] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:30.883 12:13:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:30.883 EAL: No free 2048 kB hugepages reported on node 1 00:12:30.883 [2024-07-26 12:13:23.870089] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:36.181 Initializing NVMe Controllers 00:12:36.181 Attached to NVMe over Fabrics controller at 
/var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:36.181 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:12:36.181 Initialization complete. Launching workers. 00:12:36.181 ======================================================== 00:12:36.181 Latency(us) 00:12:36.181 Device Information : IOPS MiB/s Average min max 00:12:36.181 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16060.44 62.74 7975.15 6978.48 8118.84 00:12:36.181 ======================================================== 00:12:36.181 Total : 16060.44 62.74 7975.15 6978.48 8118.84 00:12:36.181 00:12:36.181 [2024-07-26 12:13:28.910598] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:36.181 12:13:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:36.181 EAL: No free 2048 kB hugepages reported on node 1 00:12:36.181 [2024-07-26 12:13:29.134716] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:41.450 [2024-07-26 12:13:34.207475] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:41.450 Initializing NVMe Controllers 00:12:41.450 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:41.450 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:12:41.450 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:12:41.450 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:12:41.450 Associating VFIOUSER 
(/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:12:41.450 Initialization complete. Launching workers. 00:12:41.450 Starting thread on core 2 00:12:41.450 Starting thread on core 3 00:12:41.450 Starting thread on core 1 00:12:41.450 12:13:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:12:41.450 EAL: No free 2048 kB hugepages reported on node 1 00:12:41.450 [2024-07-26 12:13:34.513583] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:44.741 [2024-07-26 12:13:37.582336] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:44.741 Initializing NVMe Controllers 00:12:44.741 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:44.741 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:44.741 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:12:44.741 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:12:44.741 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:12:44.741 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:12:44.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:12:44.741 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:12:44.741 Initialization complete. Launching workers. 
00:12:44.741 Starting thread on core 1 with urgent priority queue 00:12:44.741 Starting thread on core 2 with urgent priority queue 00:12:44.741 Starting thread on core 3 with urgent priority queue 00:12:44.741 Starting thread on core 0 with urgent priority queue 00:12:44.741 SPDK bdev Controller (SPDK1 ) core 0: 5795.00 IO/s 17.26 secs/100000 ios 00:12:44.741 SPDK bdev Controller (SPDK1 ) core 1: 5985.00 IO/s 16.71 secs/100000 ios 00:12:44.741 SPDK bdev Controller (SPDK1 ) core 2: 5849.00 IO/s 17.10 secs/100000 ios 00:12:44.741 SPDK bdev Controller (SPDK1 ) core 3: 6290.00 IO/s 15.90 secs/100000 ios 00:12:44.741 ======================================================== 00:12:44.741 00:12:44.742 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:44.742 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.742 [2024-07-26 12:13:37.884570] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:44.742 Initializing NVMe Controllers 00:12:44.742 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:44.742 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:44.742 Namespace ID: 1 size: 0GB 00:12:44.742 Initialization complete. 00:12:44.742 INFO: using host memory buffer for IO 00:12:44.742 Hello world! 
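Every example binary in the runs above (spdk_nvme_perf, reconnect, arbitration, hello_world) is pointed at the same vfio-user endpoint through a `-r` transport-ID string of space-separated `key:value` fields. A minimal sketch of building and splitting that string, with values taken from this log; the helper names are ours, not part of SPDK:

```python
def build_trid(trtype, traddr, subnqn):
    """Assemble the transport-ID string the SPDK example tools take via -r."""
    return f"trtype:{trtype} traddr:{traddr} subnqn:{subnqn}"

def parse_trid(trid):
    """Split a transport-ID string back into a dict of its key:value fields.

    Split on the first ':' only, since subnqn values themselves contain ':'.
    """
    return dict(field.split(":", 1) for field in trid.split())

trid = build_trid(
    "VFIOUSER",
    "/var/run/vfio-user/domain/vfio-user1/1",
    "nqn.2019-07.io.spdk:cnode1",
)
print(trid)
print(parse_trid(trid)["subnqn"])  # nqn.2019-07.io.spdk:cnode1
```

The same `trid` string is what each run in this log passes unchanged; only the per-tool flags (`-q`, `-o`, `-w`, `-t`, `-c`) differ between invocations.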
00:12:44.742 [2024-07-26 12:13:37.918195] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:44.742 12:13:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:12:44.999 EAL: No free 2048 kB hugepages reported on node 1 00:12:44.999 [2024-07-26 12:13:38.202332] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.380 Initializing NVMe Controllers 00:12:46.380 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.380 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.380 Initialization complete. Launching workers. 00:12:46.380 submit (in ns) avg, min, max = 6901.3, 3517.8, 4016848.9 00:12:46.380 complete (in ns) avg, min, max = 25575.1, 2063.3, 4017302.2 00:12:46.380 00:12:46.380 Submit histogram 00:12:46.380 ================ 00:12:46.380 Range in us Cumulative Count 00:12:46.380 3.508 - 3.532: 0.0972% ( 13) 00:12:46.380 3.532 - 3.556: 0.7179% ( 83) 00:12:46.380 3.556 - 3.579: 1.7499% ( 138) 00:12:46.380 3.579 - 3.603: 6.1547% ( 589) 00:12:46.380 3.603 - 3.627: 11.8830% ( 766) 00:12:46.380 3.627 - 3.650: 21.4852% ( 1284) 00:12:46.380 3.650 - 3.674: 31.1696% ( 1295) 00:12:46.380 3.674 - 3.698: 42.0356% ( 1453) 00:12:46.380 3.698 - 3.721: 50.0075% ( 1066) 00:12:46.380 3.721 - 3.745: 56.0724% ( 811) 00:12:46.380 3.745 - 3.769: 60.5968% ( 605) 00:12:46.380 3.769 - 3.793: 65.0688% ( 598) 00:12:46.380 3.793 - 3.816: 68.4116% ( 447) 00:12:46.380 3.816 - 3.840: 71.5525% ( 420) 00:12:46.380 3.840 - 3.864: 74.7532% ( 428) 00:12:46.380 3.864 - 3.887: 78.1633% ( 456) 00:12:46.380 3.887 - 3.911: 82.0820% ( 524) 00:12:46.380 3.911 - 3.935: 85.4472% ( 450) 00:12:46.380 3.935 - 3.959: 87.7879% ( 313) 00:12:46.380 3.959 - 
3.982: 89.3584% ( 210) 00:12:46.380 3.982 - 4.006: 90.9587% ( 214) 00:12:46.380 4.006 - 4.030: 92.4170% ( 195) 00:12:46.380 4.030 - 4.053: 93.4191% ( 134) 00:12:46.380 4.053 - 4.077: 94.4960% ( 144) 00:12:46.380 4.077 - 4.101: 95.3335% ( 112) 00:12:46.380 4.101 - 4.124: 95.8495% ( 69) 00:12:46.380 4.124 - 4.148: 96.3431% ( 66) 00:12:46.380 4.148 - 4.172: 96.6722% ( 44) 00:12:46.380 4.172 - 4.196: 96.8591% ( 25) 00:12:46.380 4.196 - 4.219: 96.9862% ( 17) 00:12:46.380 4.219 - 4.243: 97.1283% ( 19) 00:12:46.380 4.243 - 4.267: 97.1882% ( 8) 00:12:46.380 4.267 - 4.290: 97.2779% ( 12) 00:12:46.380 4.290 - 4.314: 97.3901% ( 15) 00:12:46.380 4.314 - 4.338: 97.4499% ( 8) 00:12:46.380 4.338 - 4.361: 97.4948% ( 6) 00:12:46.380 4.361 - 4.385: 97.5546% ( 8) 00:12:46.380 4.385 - 4.409: 97.6219% ( 9) 00:12:46.380 4.409 - 4.433: 97.6518% ( 4) 00:12:46.380 4.433 - 4.456: 97.6668% ( 2) 00:12:46.380 4.456 - 4.480: 97.6817% ( 2) 00:12:46.380 4.480 - 4.504: 97.6892% ( 1) 00:12:46.380 4.504 - 4.527: 97.7042% ( 2) 00:12:46.380 4.527 - 4.551: 97.7191% ( 2) 00:12:46.380 4.551 - 4.575: 97.7341% ( 2) 00:12:46.380 4.575 - 4.599: 97.7490% ( 2) 00:12:46.380 4.599 - 4.622: 97.7715% ( 3) 00:12:46.380 4.622 - 4.646: 97.7939% ( 3) 00:12:46.380 4.646 - 4.670: 97.8462% ( 7) 00:12:46.380 4.670 - 4.693: 97.8687% ( 3) 00:12:46.380 4.693 - 4.717: 97.9285% ( 8) 00:12:46.380 4.717 - 4.741: 97.9883% ( 8) 00:12:46.380 4.741 - 4.764: 98.0033% ( 2) 00:12:46.380 4.764 - 4.788: 98.0407% ( 5) 00:12:46.380 4.788 - 4.812: 98.0482% ( 1) 00:12:46.380 4.812 - 4.836: 98.0781% ( 4) 00:12:46.380 4.836 - 4.859: 98.1229% ( 6) 00:12:46.380 4.859 - 4.883: 98.1529% ( 4) 00:12:46.380 4.907 - 4.930: 98.1753% ( 3) 00:12:46.380 4.930 - 4.954: 98.1828% ( 1) 00:12:46.380 4.954 - 4.978: 98.1902% ( 1) 00:12:46.380 4.978 - 5.001: 98.2052% ( 2) 00:12:46.380 5.001 - 5.025: 98.2127% ( 1) 00:12:46.380 5.025 - 5.049: 98.2202% ( 1) 00:12:46.380 5.049 - 5.073: 98.2426% ( 3) 00:12:46.380 5.073 - 5.096: 98.2650% ( 3) 00:12:46.380 5.096 - 
5.120: 98.2875% ( 3) 00:12:46.380 5.120 - 5.144: 98.3099% ( 3) 00:12:46.380 5.144 - 5.167: 98.3174% ( 1) 00:12:46.380 5.167 - 5.191: 98.3323% ( 2) 00:12:46.380 5.191 - 5.215: 98.3548% ( 3) 00:12:46.380 5.215 - 5.239: 98.3697% ( 2) 00:12:46.380 5.239 - 5.262: 98.3847% ( 2) 00:12:46.380 5.286 - 5.310: 98.4071% ( 3) 00:12:46.380 5.310 - 5.333: 98.4221% ( 2) 00:12:46.380 5.381 - 5.404: 98.4296% ( 1) 00:12:46.380 5.499 - 5.523: 98.4370% ( 1) 00:12:46.380 5.547 - 5.570: 98.4445% ( 1) 00:12:46.380 5.713 - 5.736: 98.4669% ( 3) 00:12:46.381 5.736 - 5.760: 98.4744% ( 1) 00:12:46.381 5.760 - 5.784: 98.4819% ( 1) 00:12:46.381 5.831 - 5.855: 98.4894% ( 1) 00:12:46.381 5.855 - 5.879: 98.5043% ( 2) 00:12:46.381 5.902 - 5.926: 98.5193% ( 2) 00:12:46.381 5.950 - 5.973: 98.5268% ( 1) 00:12:46.381 5.973 - 5.997: 98.5343% ( 1) 00:12:46.381 5.997 - 6.021: 98.5417% ( 1) 00:12:46.381 6.068 - 6.116: 98.5642% ( 3) 00:12:46.381 6.116 - 6.163: 98.5716% ( 1) 00:12:46.381 6.163 - 6.210: 98.5866% ( 2) 00:12:46.381 6.305 - 6.353: 98.5941% ( 1) 00:12:46.381 6.400 - 6.447: 98.6165% ( 3) 00:12:46.381 6.542 - 6.590: 98.6315% ( 2) 00:12:46.381 6.684 - 6.732: 98.6389% ( 1) 00:12:46.381 6.779 - 6.827: 98.6464% ( 1) 00:12:46.381 6.921 - 6.969: 98.6539% ( 1) 00:12:46.381 6.969 - 7.016: 98.6614% ( 1) 00:12:46.381 7.016 - 7.064: 98.6689% ( 1) 00:12:46.381 7.301 - 7.348: 98.6838% ( 2) 00:12:46.381 7.396 - 7.443: 98.6913% ( 1) 00:12:46.381 7.443 - 7.490: 98.6988% ( 1) 00:12:46.381 7.727 - 7.775: 98.7063% ( 1) 00:12:46.381 7.822 - 7.870: 98.7137% ( 1) 00:12:46.381 7.870 - 7.917: 98.7212% ( 1) 00:12:46.381 7.917 - 7.964: 98.7436% ( 3) 00:12:46.381 8.012 - 8.059: 98.7511% ( 1) 00:12:46.381 8.059 - 8.107: 98.7661% ( 2) 00:12:46.381 8.201 - 8.249: 98.7736% ( 1) 00:12:46.381 8.344 - 8.391: 98.7810% ( 1) 00:12:46.381 8.486 - 8.533: 98.7885% ( 1) 00:12:46.381 8.628 - 8.676: 98.7960% ( 1) 00:12:46.381 8.676 - 8.723: 98.8035% ( 1) 00:12:46.381 8.818 - 8.865: 98.8259% ( 3) 00:12:46.381 8.865 - 8.913: 98.8334% ( 1) 
00:12:46.381 8.913 - 8.960: 98.8409% ( 1) 00:12:46.381 9.055 - 9.102: 98.8483% ( 1) 00:12:46.381 9.244 - 9.292: 98.8558% ( 1) 00:12:46.381 9.387 - 9.434: 98.8633% ( 1) 00:12:46.381 9.529 - 9.576: 98.8783% ( 2) 00:12:46.381 9.576 - 9.624: 98.8857% ( 1) 00:12:46.381 9.624 - 9.671: 98.8932% ( 1) 00:12:46.381 9.671 - 9.719: 98.9156% ( 3) 00:12:46.381 9.766 - 9.813: 98.9231% ( 1) 00:12:46.381 9.861 - 9.908: 98.9306% ( 1) 00:12:46.381 10.050 - 10.098: 98.9381% ( 1) 00:12:46.381 10.098 - 10.145: 98.9530% ( 2) 00:12:46.381 10.287 - 10.335: 98.9680% ( 2) 00:12:46.381 10.335 - 10.382: 98.9904% ( 3) 00:12:46.381 10.524 - 10.572: 98.9979% ( 1) 00:12:46.381 10.572 - 10.619: 99.0054% ( 1) 00:12:46.381 10.619 - 10.667: 99.0129% ( 1) 00:12:46.381 10.667 - 10.714: 99.0203% ( 1) 00:12:46.381 11.188 - 11.236: 99.0278% ( 1) 00:12:46.381 11.283 - 11.330: 99.0353% ( 1) 00:12:46.381 11.520 - 11.567: 99.0428% ( 1) 00:12:46.381 11.710 - 11.757: 99.0503% ( 1) 00:12:46.381 11.947 - 11.994: 99.0577% ( 1) 00:12:46.381 12.041 - 12.089: 99.0727% ( 2) 00:12:46.381 12.136 - 12.231: 99.0802% ( 1) 00:12:46.381 12.326 - 12.421: 99.0876% ( 1) 00:12:46.381 12.421 - 12.516: 99.1026% ( 2) 00:12:46.381 12.516 - 12.610: 99.1101% ( 1) 00:12:46.381 12.705 - 12.800: 99.1250% ( 2) 00:12:46.381 13.084 - 13.179: 99.1400% ( 2) 00:12:46.381 13.559 - 13.653: 99.1475% ( 1) 00:12:46.381 13.748 - 13.843: 99.1624% ( 2) 00:12:46.381 13.938 - 14.033: 99.1699% ( 1) 00:12:46.381 14.127 - 14.222: 99.1774% ( 1) 00:12:46.381 14.222 - 14.317: 99.1849% ( 1) 00:12:46.381 14.412 - 14.507: 99.1923% ( 1) 00:12:46.381 14.981 - 15.076: 99.2148% ( 3) 00:12:46.381 15.265 - 15.360: 99.2223% ( 1) 00:12:46.381 16.972 - 17.067: 99.2297% ( 1) 00:12:46.381 17.067 - 17.161: 99.2447% ( 2) 00:12:46.381 17.161 - 17.256: 99.2522% ( 1) 00:12:46.381 17.256 - 17.351: 99.2671% ( 2) 00:12:46.381 17.351 - 17.446: 99.2970% ( 4) 00:12:46.381 17.446 - 17.541: 99.3195% ( 3) 00:12:46.381 17.541 - 17.636: 99.3344% ( 2) 00:12:46.381 17.636 - 17.730: 99.3643% 
( 4) 00:12:46.381 17.730 - 17.825: 99.4017% ( 5) 00:12:46.381 17.825 - 17.920: 99.4242% ( 3) 00:12:46.381 17.920 - 18.015: 99.4765% ( 7) 00:12:46.381 18.015 - 18.110: 99.4990% ( 3) 00:12:46.381 18.110 - 18.204: 99.5588% ( 8) 00:12:46.381 18.204 - 18.299: 99.6111% ( 7) 00:12:46.381 18.299 - 18.394: 99.6635% ( 7) 00:12:46.381 18.394 - 18.489: 99.6859% ( 3) 00:12:46.381 18.489 - 18.584: 99.7083% ( 3) 00:12:46.381 18.584 - 18.679: 99.7308% ( 3) 00:12:46.381 18.679 - 18.773: 99.7682% ( 5) 00:12:46.381 18.773 - 18.868: 99.7757% ( 1) 00:12:46.381 18.868 - 18.963: 99.7831% ( 1) 00:12:46.381 18.963 - 19.058: 99.8056% ( 3) 00:12:46.381 19.058 - 19.153: 99.8205% ( 2) 00:12:46.381 19.153 - 19.247: 99.8355% ( 2) 00:12:46.381 19.247 - 19.342: 99.8504% ( 2) 00:12:46.381 19.437 - 19.532: 99.8579% ( 1) 00:12:46.381 19.627 - 19.721: 99.8654% ( 1) 00:12:46.381 19.911 - 20.006: 99.8729% ( 1) 00:12:46.381 20.670 - 20.764: 99.8803% ( 1) 00:12:46.381 21.997 - 22.092: 99.8878% ( 1) 00:12:46.381 22.566 - 22.661: 99.8953% ( 1) 00:12:46.381 23.893 - 23.988: 99.9028% ( 1) 00:12:46.381 25.790 - 25.979: 99.9103% ( 1) 00:12:46.381 28.255 - 28.444: 99.9177% ( 1) 00:12:46.381 28.634 - 28.824: 99.9252% ( 1) 00:12:46.381 3835.070 - 3859.342: 99.9327% ( 1) 00:12:46.381 3980.705 - 4004.978: 99.9701% ( 5) 00:12:46.381 4004.978 - 4029.250: 100.0000% ( 4) 00:12:46.381 00:12:46.381 Complete histogram 00:12:46.381 ================== 00:12:46.381 Range in us Cumulative Count 00:12:46.381 2.062 - 2.074: 2.6249% ( 351) 00:12:46.381 2.074 - 2.086: 37.7131% ( 4692) 00:12:46.381 2.086 - 2.098: 48.0781% ( 1386) 00:12:46.381 2.098 - 2.110: 51.7948% ( 497) 00:12:46.381 2.110 - 2.121: 61.4343% ( 1289) 00:12:46.381 2.121 - 2.133: 63.5432% ( 282) 00:12:46.381 2.133 - 2.145: 68.6733% ( 686) 00:12:46.381 2.145 - 2.157: 79.9581% ( 1509) 00:12:46.381 2.157 - 2.169: 81.7679% ( 242) 00:12:46.381 2.169 - 2.181: 84.2507% ( 332) 00:12:46.381 2.181 - 2.193: 87.7879% ( 473) 00:12:46.381 2.193 - 2.204: 88.5881% ( 107) 
00:12:46.381 2.204 - 2.216: 89.4705% ( 118) 00:12:46.381 2.216 - 2.228: 92.1627% ( 360) 00:12:46.381 2.228 - 2.240: 93.9874% ( 244) 00:12:46.381 2.240 - 2.252: 94.5334% ( 73) 00:12:46.381 2.252 - 2.264: 95.0344% ( 67) 00:12:46.381 2.264 - 2.276: 95.1466% ( 15) 00:12:46.381 2.276 - 2.287: 95.2887% ( 19) 00:12:46.381 2.287 - 2.299: 95.5728% ( 38) 00:12:46.381 2.299 - 2.311: 95.9542% ( 51) 00:12:46.381 2.311 - 2.323: 96.0589% ( 14) 00:12:46.381 2.323 - 2.335: 96.0739% ( 2) 00:12:46.381 2.335 - 2.347: 96.1188% ( 6) 00:12:46.381 2.347 - 2.359: 96.2459% ( 17) 00:12:46.381 2.359 - 2.370: 96.5076% ( 35) 00:12:46.381 2.370 - 2.382: 96.8292% ( 43) 00:12:46.381 2.382 - 2.394: 97.2405% ( 55) 00:12:46.381 2.394 - 2.406: 97.5995% ( 48) 00:12:46.381 2.406 - 2.418: 97.8612% ( 35) 00:12:46.381 2.418 - 2.430: 98.0033% ( 19) 00:12:46.381 2.430 - 2.441: 98.1155% ( 15) 00:12:46.381 2.441 - 2.453: 98.1977% ( 11) 00:12:46.381 2.453 - 2.465: 98.2650% ( 9) 00:12:46.381 2.465 - 2.477: 98.3099% ( 6) 00:12:46.381 2.477 - 2.489: 98.3398% ( 4) 00:12:46.381 2.489 - 2.501: 98.3548% ( 2) 00:12:46.381 2.501 - 2.513: 98.3847% ( 4) 00:12:46.381 2.524 - 2.536: 98.3922% ( 1) 00:12:46.381 2.548 - 2.560: 98.3996% ( 1) 00:12:46.381 2.572 - 2.584: 98.4071% ( 1) 00:12:46.381 2.607 - 2.619: 98.4146% ( 1) 00:12:46.381 2.643 - 2.655: 98.4221% ( 1) 00:12:46.381 2.690 - 2.702: 98.4370% ( 2) 00:12:46.381 2.702 - 2.714: 98.4445% ( 1) 00:12:46.381 2.714 - 2.726: 98.4520% ( 1) 00:12:46.381 2.726 - 2.738: 98.4744% ( 3) 00:12:46.381 3.129 - 3.153: 98.4819% ( 1) 00:12:46.381 3.319 - 3.342: 98.5043% ( 3) 00:12:46.381 3.342 - 3.366: 98.5193% ( 2) 00:12:46.381 3.366 - 3.390: 98.5268% ( 1) 00:12:46.381 3.390 - 3.413: 98.5343% ( 1) 00:12:46.381 3.413 - 3.437: 98.5417% ( 1) 00:12:46.381 3.437 - 3.461: 98.5492% ( 1) 00:12:46.381 3.484 - 3.508: 98.5567% ( 1) 00:12:46.381 3.508 - 3.532: 98.5716% ( 2) 00:12:46.381 3.556 - 3.579: 98.5866% ( 2) 00:12:46.381 3.603 - 3.627: 98.5941% ( 1) 00:12:46.381 3.627 - 3.650: 98.6090% ( 2) 
00:12:46.381 3.864 - 3.887: 98.6165% ( 1) 00:12:46.381 3.887 - 3.911: 98.6240% ( 1) 00:12:46.381 [2024-07-26 12:13:39.224489] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:46.381 3.959 - 3.982: 98.6315% ( 1) 00:12:46.382 5.404 - 5.428: 98.6389% ( 1) 00:12:46.382 5.476 - 5.499: 98.6464% ( 1) 00:12:46.382 5.855 - 5.879: 98.6539% ( 1) 00:12:46.382 5.997 - 6.021: 98.6614% ( 1) 00:12:46.382 6.163 - 6.210: 98.6689% ( 1) 00:12:46.382 6.258 - 6.305: 98.6838% ( 2) 00:12:46.382 6.400 - 6.447: 98.6913% ( 1) 00:12:46.382 6.590 - 6.637: 98.6988% ( 1) 00:12:46.382 6.684 - 6.732: 98.7063% ( 1) 00:12:46.382 7.206 - 7.253: 98.7137% ( 1) 00:12:46.382 7.396 - 7.443: 98.7212% ( 1) 00:12:46.382 7.585 - 7.633: 98.7287% ( 1) 00:12:46.382 7.822 - 7.870: 98.7362% ( 1) 00:12:46.382 8.012 - 8.059: 98.7436% ( 1) 00:12:46.382 8.059 - 8.107: 98.7511% ( 1) 00:12:46.382 8.154 - 8.201: 98.7586% ( 1) 00:12:46.382 8.486 - 8.533: 98.7661% ( 1) 00:12:46.382 9.055 - 9.102: 98.7736% ( 1) 00:12:46.382 10.714 - 10.761: 98.7810% ( 1) 00:12:46.382 13.464 - 13.559: 98.7885% ( 1) 00:12:46.382 15.550 - 15.644: 98.7960% ( 1) 00:12:46.382 15.644 - 15.739: 98.8184% ( 3) 00:12:46.382 15.834 - 15.929: 98.8483% ( 4) 00:12:46.382 15.929 - 16.024: 98.8558% ( 1) 00:12:46.382 16.024 - 16.119: 98.8932% ( 5) 00:12:46.382 16.119 - 16.213: 98.9082% ( 2) 00:12:46.382 16.213 - 16.308: 98.9231% ( 2) 00:12:46.382 16.308 - 16.403: 98.9605% ( 5) 00:12:46.382 16.403 - 16.498: 99.0203% ( 8) 00:12:46.382 16.498 - 16.593: 99.0951% ( 10) 00:12:46.382 16.593 - 16.687: 99.1475% ( 7) 00:12:46.382 16.687 - 16.782: 99.1699% ( 3) 00:12:46.382 16.782 - 16.877: 99.1774% ( 1) 00:12:46.382 16.877 - 16.972: 99.1923% ( 2) 00:12:46.382 16.972 - 17.067: 99.2148% ( 3) 00:12:46.382 17.067 - 17.161: 99.2372% ( 3) 00:12:46.382 17.161 - 17.256: 99.2522% ( 2) 00:12:46.382 17.351 - 17.446: 99.2821% ( 4) 00:12:46.382 17.446 - 17.541: 99.2970% ( 2) 00:12:46.382 17.541 - 17.636: 99.3045% ( 1)
00:12:46.382 17.730 - 17.825: 99.3344% ( 4) 00:12:46.382 17.920 - 18.015: 99.3494% ( 2) 00:12:46.382 18.015 - 18.110: 99.3643% ( 2) 00:12:46.382 18.110 - 18.204: 99.3868% ( 3) 00:12:46.382 18.299 - 18.394: 99.3943% ( 1) 00:12:46.382 21.049 - 21.144: 99.4017% ( 1) 00:12:46.382 21.333 - 21.428: 99.4092% ( 1) 00:12:46.382 25.790 - 25.979: 99.4167% ( 1) 00:12:46.382 3980.705 - 4004.978: 99.8130% ( 53) 00:12:46.382 4004.978 - 4029.250: 100.0000% ( 25) 00:12:46.382 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:46.382 [ 00:12:46.382 { 00:12:46.382 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:46.382 "subtype": "Discovery", 00:12:46.382 "listen_addresses": [], 00:12:46.382 "allow_any_host": true, 00:12:46.382 "hosts": [] 00:12:46.382 }, 00:12:46.382 { 00:12:46.382 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:46.382 "subtype": "NVMe", 00:12:46.382 "listen_addresses": [ 00:12:46.382 { 00:12:46.382 "trtype": "VFIOUSER", 00:12:46.382 "adrfam": "IPv4", 00:12:46.382 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:46.382 "trsvcid": "0" 00:12:46.382 } 00:12:46.382 ], 00:12:46.382 "allow_any_host": true, 00:12:46.382 "hosts": [], 00:12:46.382 "serial_number": "SPDK1", 00:12:46.382 "model_number": "SPDK bdev Controller", 00:12:46.382 
"max_namespaces": 32, 00:12:46.382 "min_cntlid": 1, 00:12:46.382 "max_cntlid": 65519, 00:12:46.382 "namespaces": [ 00:12:46.382 { 00:12:46.382 "nsid": 1, 00:12:46.382 "bdev_name": "Malloc1", 00:12:46.382 "name": "Malloc1", 00:12:46.382 "nguid": "23F04DC1B44E4EFF94A553C9C449C6B5", 00:12:46.382 "uuid": "23f04dc1-b44e-4eff-94a5-53c9c449c6b5" 00:12:46.382 } 00:12:46.382 ] 00:12:46.382 }, 00:12:46.382 { 00:12:46.382 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:46.382 "subtype": "NVMe", 00:12:46.382 "listen_addresses": [ 00:12:46.382 { 00:12:46.382 "trtype": "VFIOUSER", 00:12:46.382 "adrfam": "IPv4", 00:12:46.382 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:46.382 "trsvcid": "0" 00:12:46.382 } 00:12:46.382 ], 00:12:46.382 "allow_any_host": true, 00:12:46.382 "hosts": [], 00:12:46.382 "serial_number": "SPDK2", 00:12:46.382 "model_number": "SPDK bdev Controller", 00:12:46.382 "max_namespaces": 32, 00:12:46.382 "min_cntlid": 1, 00:12:46.382 "max_cntlid": 65519, 00:12:46.382 "namespaces": [ 00:12:46.382 { 00:12:46.382 "nsid": 1, 00:12:46.382 "bdev_name": "Malloc2", 00:12:46.382 "name": "Malloc2", 00:12:46.382 "nguid": "3FD908165BC6451AA5FA411757B933A5", 00:12:46.382 "uuid": "3fd90816-5bc6-451a-a5fa-411757b933a5" 00:12:46.382 } 00:12:46.382 ] 00:12:46.382 } 00:12:46.382 ] 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2853436 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:12:46.382 12:13:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:12:46.382 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:12:46.382 EAL: No free 2048 kB hugepages reported on node 1 00:12:46.640 [2024-07-26 12:13:39.683530] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:12:46.640 Malloc3 00:12:46.640 12:13:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:12:46.898 [2024-07-26 12:13:40.052237] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:12:46.898 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:12:46.898 Asynchronous Event Request test 00:12:46.898 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.898 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:12:46.898 Registering asynchronous event callbacks... 00:12:46.898 Starting namespace attribute notice tests for all controllers... 
00:12:46.898 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:12:46.898 aer_cb - Changed Namespace 00:12:46.898 Cleaning up... 00:12:47.157 [ 00:12:47.157 { 00:12:47.157 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:47.157 "subtype": "Discovery", 00:12:47.157 "listen_addresses": [], 00:12:47.157 "allow_any_host": true, 00:12:47.157 "hosts": [] 00:12:47.157 }, 00:12:47.157 { 00:12:47.157 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:12:47.157 "subtype": "NVMe", 00:12:47.157 "listen_addresses": [ 00:12:47.157 { 00:12:47.157 "trtype": "VFIOUSER", 00:12:47.157 "adrfam": "IPv4", 00:12:47.157 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:12:47.157 "trsvcid": "0" 00:12:47.157 } 00:12:47.157 ], 00:12:47.157 "allow_any_host": true, 00:12:47.157 "hosts": [], 00:12:47.157 "serial_number": "SPDK1", 00:12:47.157 "model_number": "SPDK bdev Controller", 00:12:47.157 "max_namespaces": 32, 00:12:47.157 "min_cntlid": 1, 00:12:47.157 "max_cntlid": 65519, 00:12:47.157 "namespaces": [ 00:12:47.157 { 00:12:47.157 "nsid": 1, 00:12:47.157 "bdev_name": "Malloc1", 00:12:47.157 "name": "Malloc1", 00:12:47.157 "nguid": "23F04DC1B44E4EFF94A553C9C449C6B5", 00:12:47.157 "uuid": "23f04dc1-b44e-4eff-94a5-53c9c449c6b5" 00:12:47.157 }, 00:12:47.157 { 00:12:47.157 "nsid": 2, 00:12:47.157 "bdev_name": "Malloc3", 00:12:47.157 "name": "Malloc3", 00:12:47.157 "nguid": "93F7C37218CD401795BEC8533A040136", 00:12:47.157 "uuid": "93f7c372-18cd-4017-95be-c8533a040136" 00:12:47.157 } 00:12:47.157 ] 00:12:47.157 }, 00:12:47.157 { 00:12:47.157 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:12:47.157 "subtype": "NVMe", 00:12:47.157 "listen_addresses": [ 00:12:47.157 { 00:12:47.157 "trtype": "VFIOUSER", 00:12:47.157 "adrfam": "IPv4", 00:12:47.157 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:12:47.157 "trsvcid": "0" 00:12:47.157 } 00:12:47.157 ], 00:12:47.157 "allow_any_host": true, 00:12:47.157 "hosts": [], 00:12:47.157 "serial_number": 
"SPDK2", 00:12:47.157 "model_number": "SPDK bdev Controller", 00:12:47.157 "max_namespaces": 32, 00:12:47.157 "min_cntlid": 1, 00:12:47.157 "max_cntlid": 65519, 00:12:47.157 "namespaces": [ 00:12:47.157 { 00:12:47.157 "nsid": 1, 00:12:47.157 "bdev_name": "Malloc2", 00:12:47.157 "name": "Malloc2", 00:12:47.157 "nguid": "3FD908165BC6451AA5FA411757B933A5", 00:12:47.157 "uuid": "3fd90816-5bc6-451a-a5fa-411757b933a5" 00:12:47.157 } 00:12:47.157 ] 00:12:47.157 } 00:12:47.157 ] 00:12:47.157 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2853436 00:12:47.157 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:12:47.157 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:12:47.157 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:12:47.157 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:12:47.157 [2024-07-26 12:13:40.331135] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
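The `nvmf_get_subsystems` dumps above are plain JSON once the log timestamps are stripped; the second dump shows the namespace added by `nvmf_subsystem_add_ns` appearing under nsid 2. A small sketch of checking that programmatically, with a trimmed-down copy of the reply inlined (field names as shown in the log; the `find_ns` helper is ours):

```python
import json

# Trimmed-down copy of the second nvmf_get_subsystems reply from the log
# (only the fields needed here; the real reply carries many more).
reply = json.loads("""
[
  {
    "nqn": "nqn.2019-07.io.spdk:cnode1",
    "subtype": "NVMe",
    "namespaces": [
      {"nsid": 1, "bdev_name": "Malloc1", "name": "Malloc1"},
      {"nsid": 2, "bdev_name": "Malloc3", "name": "Malloc3"}
    ]
  }
]
""")

def find_ns(subsystems, nqn, bdev):
    """Return the nsid under which `bdev` is exposed by subsystem `nqn`, or None."""
    for sub in subsystems:
        if sub.get("nqn") == nqn:
            for ns in sub.get("namespaces", []):
                if ns.get("bdev_name") == bdev:
                    return ns["nsid"]
    return None

print(find_ns(reply, "nqn.2019-07.io.spdk:cnode1", "Malloc3"))  # 2
```

This mirrors what the test script verifies by eye here: after `nvmf_subsystem_add_ns ... Malloc3 -n 2`, the Malloc3 bdev shows up as namespace 2 of cnode1 while the pre-existing Malloc1 stays at nsid 1.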
00:12:47.157 [2024-07-26 12:13:40.331178] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2853455 ] 00:12:47.157 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.157 [2024-07-26 12:13:40.366489] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:12:47.157 [2024-07-26 12:13:40.374397] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:47.157 [2024-07-26 12:13:40.374426] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f2f27d6f000 00:12:47.157 [2024-07-26 12:13:40.375395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.157 [2024-07-26 12:13:40.376396] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.157 [2024-07-26 12:13:40.377395] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.157 [2024-07-26 12:13:40.378411] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:47.157 [2024-07-26 12:13:40.379418] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:47.157 [2024-07-26 12:13:40.380420] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.157 [2024-07-26 12:13:40.381424] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, 
Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:12:47.157 [2024-07-26 12:13:40.382429] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:12:47.157 [2024-07-26 12:13:40.383441] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:12:47.157 [2024-07-26 12:13:40.383463] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f2f27d64000 00:12:47.157 [2024-07-26 12:13:40.384609] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:12:47.157 [2024-07-26 12:13:40.400776] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:12:47.157 [2024-07-26 12:13:40.400810] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:12:47.157 [2024-07-26 12:13:40.402892] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:47.157 [2024-07-26 12:13:40.402946] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:12:47.157 [2024-07-26 12:13:40.403039] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:12:47.157 [2024-07-26 12:13:40.403092] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:12:47.157 [2024-07-26 12:13:40.403105] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:12:47.158 [2024-07-26 12:13:40.405077] 
nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:12:47.158 [2024-07-26 12:13:40.405106] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:12:47.158 [2024-07-26 12:13:40.405136] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:12:47.158 [2024-07-26 12:13:40.405905] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:12:47.158 [2024-07-26 12:13:40.405925] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:12:47.158 [2024-07-26 12:13:40.405939] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:12:47.158 [2024-07-26 12:13:40.406910] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:12:47.158 [2024-07-26 12:13:40.406931] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:12:47.158 [2024-07-26 12:13:40.407921] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:12:47.158 [2024-07-26 12:13:40.407961] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:12:47.158 [2024-07-26 12:13:40.407972] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:12:47.158 [2024-07-26 12:13:40.407984] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:12:47.158 [2024-07-26 12:13:40.408095] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:12:47.158 [2024-07-26 12:13:40.408106] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:12:47.158 [2024-07-26 12:13:40.408115] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:12:47.158 [2024-07-26 12:13:40.408932] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:12:47.418 [2024-07-26 12:13:40.409942] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:12:47.418 [2024-07-26 12:13:40.410953] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:47.418 [2024-07-26 12:13:40.411949] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:47.418 [2024-07-26 12:13:40.412032] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:12:47.418 [2024-07-26 12:13:40.412961] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:12:47.418 [2024-07-26 12:13:40.412982] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:12:47.418 [2024-07-26 12:13:40.412992] 
nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:12:47.418 [2024-07-26 12:13:40.413015] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:12:47.418 [2024-07-26 12:13:40.413029] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:12:47.418 [2024-07-26 12:13:40.413078] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:47.418 [2024-07-26 12:13:40.413091] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.418 [2024-07-26 12:13:40.413098] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.418 [2024-07-26 12:13:40.413120] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.418 [2024-07-26 12:13:40.420073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:12:47.418 [2024-07-26 12:13:40.420099] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:12:47.418 [2024-07-26 12:13:40.420108] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:12:47.418 [2024-07-26 12:13:40.420116] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:12:47.418 [2024-07-26 12:13:40.420124] nvme_ctrlr.c:2075:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:12:47.418 [2024-07-26 12:13:40.420137] 
nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:12:47.418 [2024-07-26 12:13:40.420147] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:12:47.418 [2024-07-26 12:13:40.420155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:12:47.418 [2024-07-26 12:13:40.420169] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:12:47.418 [2024-07-26 12:13:40.420190] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:12:47.418 [2024-07-26 12:13:40.428071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:12:47.418 [2024-07-26 12:13:40.428102] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.418 [2024-07-26 12:13:40.428117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.418 [2024-07-26 12:13:40.428130] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.418 [2024-07-26 12:13:40.428142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:12:47.418 [2024-07-26 12:13:40.428151] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.428167] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.428183] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:12:47.419 [2024-07-26 12:13:40.436071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:12:47.419 [2024-07-26 12:13:40.436090] nvme_ctrlr.c:3014:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:12:47.419 [2024-07-26 12:13:40.436100] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.436117] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.436128] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.436143] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:47.419 [2024-07-26 12:13:40.444080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:12:47.419 [2024-07-26 12:13:40.444157] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.444174] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.444189] 
nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:12:47.419 [2024-07-26 12:13:40.444197] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:12:47.419 [2024-07-26 12:13:40.444203] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.419 [2024-07-26 12:13:40.444217] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:12:47.419 [2024-07-26 12:13:40.452088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:12:47.419 [2024-07-26 12:13:40.452125] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:12:47.419 [2024-07-26 12:13:40.452142] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.452158] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.452171] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:47.419 [2024-07-26 12:13:40.452180] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.419 [2024-07-26 12:13:40.452186] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.419 [2024-07-26 12:13:40.452195] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.419 [2024-07-26 12:13:40.460087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 
sqhd:000a p:1 m:0 dnr:0 00:12:47.419 [2024-07-26 12:13:40.460118] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.460135] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.460149] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:12:47.419 [2024-07-26 12:13:40.460157] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:12:47.419 [2024-07-26 12:13:40.460163] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.419 [2024-07-26 12:13:40.460173] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.419 [2024-07-26 12:13:40.468082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:12:47.419 [2024-07-26 12:13:40.468106] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.468125] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.468142] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.468155] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:12:47.419 
[2024-07-26 12:13:40.468164] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.468173] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.468182] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:12:47.419 [2024-07-26 12:13:40.468189] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:12:47.419 [2024-07-26 12:13:40.468201] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:12:47.419 [2024-07-26 12:13:40.468230] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:12:47.419 [2024-07-26 12:13:40.476083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:12:47.419 [2024-07-26 12:13:40.476125] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:12:47.419 [2024-07-26 12:13:40.484086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:12:47.419 [2024-07-26 12:13:40.484112] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:12:47.419 [2024-07-26 12:13:40.492068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:12:47.419 [2024-07-26 12:13:40.492094] nvme_qpair.c: 213:nvme_admin_qpair_print_command: 
*NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:12:47.419 [2024-07-26 12:13:40.500070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:12:47.419 [2024-07-26 12:13:40.500102] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:12:47.419 [2024-07-26 12:13:40.500113] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:12:47.419 [2024-07-26 12:13:40.500120] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:12:47.419 [2024-07-26 12:13:40.500126] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:12:47.420 [2024-07-26 12:13:40.500132] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:12:47.420 [2024-07-26 12:13:40.500141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:12:47.420 [2024-07-26 12:13:40.500153] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:12:47.420 [2024-07-26 12:13:40.500162] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:12:47.420 [2024-07-26 12:13:40.500168] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.420 [2024-07-26 12:13:40.500176] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:12:47.420 [2024-07-26 12:13:40.500187] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:12:47.420 [2024-07-26 12:13:40.500195] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 
0x2000002fb000 00:12:47.420 [2024-07-26 12:13:40.500201] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.420 [2024-07-26 12:13:40.500210] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:12:47.420 [2024-07-26 12:13:40.500222] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:12:47.420 [2024-07-26 12:13:40.500230] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:12:47.420 [2024-07-26 12:13:40.500236] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:12:47.420 [2024-07-26 12:13:40.500245] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:12:47.420 [2024-07-26 12:13:40.508068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:12:47.420 [2024-07-26 12:13:40.508096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:12:47.420 [2024-07-26 12:13:40.508117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:12:47.420 [2024-07-26 12:13:40.508130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:12:47.420 ===================================================== 00:12:47.420 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:47.420 ===================================================== 00:12:47.420 Controller Capabilities/Features 00:12:47.420 ================================ 00:12:47.420 Vendor ID: 4e58 00:12:47.420 
Subsystem Vendor ID: 4e58 00:12:47.420 Serial Number: SPDK2 00:12:47.420 Model Number: SPDK bdev Controller 00:12:47.420 Firmware Version: 24.09 00:12:47.420 Recommended Arb Burst: 6 00:12:47.420 IEEE OUI Identifier: 8d 6b 50 00:12:47.420 Multi-path I/O 00:12:47.420 May have multiple subsystem ports: Yes 00:12:47.420 May have multiple controllers: Yes 00:12:47.420 Associated with SR-IOV VF: No 00:12:47.420 Max Data Transfer Size: 131072 00:12:47.420 Max Number of Namespaces: 32 00:12:47.420 Max Number of I/O Queues: 127 00:12:47.420 NVMe Specification Version (VS): 1.3 00:12:47.420 NVMe Specification Version (Identify): 1.3 00:12:47.420 Maximum Queue Entries: 256 00:12:47.420 Contiguous Queues Required: Yes 00:12:47.420 Arbitration Mechanisms Supported 00:12:47.420 Weighted Round Robin: Not Supported 00:12:47.420 Vendor Specific: Not Supported 00:12:47.420 Reset Timeout: 15000 ms 00:12:47.420 Doorbell Stride: 4 bytes 00:12:47.420 NVM Subsystem Reset: Not Supported 00:12:47.420 Command Sets Supported 00:12:47.420 NVM Command Set: Supported 00:12:47.420 Boot Partition: Not Supported 00:12:47.420 Memory Page Size Minimum: 4096 bytes 00:12:47.420 Memory Page Size Maximum: 4096 bytes 00:12:47.420 Persistent Memory Region: Not Supported 00:12:47.420 Optional Asynchronous Events Supported 00:12:47.420 Namespace Attribute Notices: Supported 00:12:47.420 Firmware Activation Notices: Not Supported 00:12:47.420 ANA Change Notices: Not Supported 00:12:47.420 PLE Aggregate Log Change Notices: Not Supported 00:12:47.420 LBA Status Info Alert Notices: Not Supported 00:12:47.420 EGE Aggregate Log Change Notices: Not Supported 00:12:47.420 Normal NVM Subsystem Shutdown event: Not Supported 00:12:47.420 Zone Descriptor Change Notices: Not Supported 00:12:47.420 Discovery Log Change Notices: Not Supported 00:12:47.420 Controller Attributes 00:12:47.420 128-bit Host Identifier: Supported 00:12:47.420 Non-Operational Permissive Mode: Not Supported 00:12:47.420 NVM Sets: Not Supported 
00:12:47.420 Read Recovery Levels: Not Supported 00:12:47.420 Endurance Groups: Not Supported 00:12:47.420 Predictable Latency Mode: Not Supported 00:12:47.420 Traffic Based Keep Alive: Not Supported 00:12:47.420 Namespace Granularity: Not Supported 00:12:47.420 SQ Associations: Not Supported 00:12:47.420 UUID List: Not Supported 00:12:47.420 Multi-Domain Subsystem: Not Supported 00:12:47.420 Fixed Capacity Management: Not Supported 00:12:47.420 Variable Capacity Management: Not Supported 00:12:47.421 Delete Endurance Group: Not Supported 00:12:47.421 Delete NVM Set: Not Supported 00:12:47.421 Extended LBA Formats Supported: Not Supported 00:12:47.421 Flexible Data Placement Supported: Not Supported 00:12:47.421 00:12:47.421 Controller Memory Buffer Support 00:12:47.421 ================================ 00:12:47.421 Supported: No 00:12:47.421 00:12:47.421 Persistent Memory Region Support 00:12:47.421 ================================ 00:12:47.421 Supported: No 00:12:47.421 00:12:47.421 Admin Command Set Attributes 00:12:47.421 ============================ 00:12:47.421 Security Send/Receive: Not Supported 00:12:47.421 Format NVM: Not Supported 00:12:47.421 Firmware Activate/Download: Not Supported 00:12:47.421 Namespace Management: Not Supported 00:12:47.421 Device Self-Test: Not Supported 00:12:47.421 Directives: Not Supported 00:12:47.421 NVMe-MI: Not Supported 00:12:47.421 Virtualization Management: Not Supported 00:12:47.421 Doorbell Buffer Config: Not Supported 00:12:47.421 Get LBA Status Capability: Not Supported 00:12:47.421 Command & Feature Lockdown Capability: Not Supported 00:12:47.421 Abort Command Limit: 4 00:12:47.421 Async Event Request Limit: 4 00:12:47.421 Number of Firmware Slots: N/A 00:12:47.421 Firmware Slot 1 Read-Only: N/A 00:12:47.421 Firmware Activation Without Reset: N/A 00:12:47.421 Multiple Update Detection Support: N/A 00:12:47.421 Firmware Update Granularity: No Information Provided 00:12:47.421 Per-Namespace SMART Log: No 00:12:47.421 
Asymmetric Namespace Access Log Page: Not Supported 00:12:47.421 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:12:47.421 Command Effects Log Page: Supported 00:12:47.421 Get Log Page Extended Data: Supported 00:12:47.421 Telemetry Log Pages: Not Supported 00:12:47.421 Persistent Event Log Pages: Not Supported 00:12:47.421 Supported Log Pages Log Page: May Support 00:12:47.421 Commands Supported & Effects Log Page: Not Supported 00:12:47.421 Feature Identifiers & Effects Log Page: May Support 00:12:47.421 NVMe-MI Commands & Effects Log Page: May Support 00:12:47.421 Data Area 4 for Telemetry Log: Not Supported 00:12:47.421 Error Log Page Entries Supported: 128 00:12:47.421 Keep Alive: Supported 00:12:47.421 Keep Alive Granularity: 10000 ms 00:12:47.421 00:12:47.421 NVM Command Set Attributes 00:12:47.421 ========================== 00:12:47.421 Submission Queue Entry Size 00:12:47.421 Max: 64 00:12:47.421 Min: 64 00:12:47.421 Completion Queue Entry Size 00:12:47.421 Max: 16 00:12:47.421 Min: 16 00:12:47.421 Number of Namespaces: 32 00:12:47.421 Compare Command: Supported 00:12:47.421 Write Uncorrectable Command: Not Supported 00:12:47.421 Dataset Management Command: Supported 00:12:47.421 Write Zeroes Command: Supported 00:12:47.421 Set Features Save Field: Not Supported 00:12:47.421 Reservations: Not Supported 00:12:47.421 Timestamp: Not Supported 00:12:47.421 Copy: Supported 00:12:47.421 Volatile Write Cache: Present 00:12:47.421 Atomic Write Unit (Normal): 1 00:12:47.421 Atomic Write Unit (PFail): 1 00:12:47.421 Atomic Compare & Write Unit: 1 00:12:47.421 Fused Compare & Write: Supported 00:12:47.421 Scatter-Gather List 00:12:47.421 SGL Command Set: Supported (Dword aligned) 00:12:47.421 SGL Keyed: Not Supported 00:12:47.421 SGL Bit Bucket Descriptor: Not Supported 00:12:47.421 SGL Metadata Pointer: Not Supported 00:12:47.421 Oversized SGL: Not Supported 00:12:47.421 SGL Metadata Address: Not Supported 00:12:47.421 SGL Offset: Not Supported 00:12:47.421 Transport 
SGL Data Block: Not Supported 00:12:47.421 Replay Protected Memory Block: Not Supported 00:12:47.421 00:12:47.421 Firmware Slot Information 00:12:47.421 ========================= 00:12:47.421 Active slot: 1 00:12:47.421 Slot 1 Firmware Revision: 24.09 00:12:47.421 00:12:47.421 00:12:47.421 Commands Supported and Effects 00:12:47.421 ============================== 00:12:47.421 Admin Commands 00:12:47.421 -------------- 00:12:47.422 Get Log Page (02h): Supported 00:12:47.422 Identify (06h): Supported 00:12:47.422 Abort (08h): Supported 00:12:47.422 Set Features (09h): Supported 00:12:47.422 Get Features (0Ah): Supported 00:12:47.422 Asynchronous Event Request (0Ch): Supported 00:12:47.422 Keep Alive (18h): Supported 00:12:47.422 I/O Commands 00:12:47.422 ------------ 00:12:47.422 Flush (00h): Supported LBA-Change 00:12:47.422 Write (01h): Supported LBA-Change 00:12:47.422 Read (02h): Supported 00:12:47.422 Compare (05h): Supported 00:12:47.422 Write Zeroes (08h): Supported LBA-Change 00:12:47.422 Dataset Management (09h): Supported LBA-Change 00:12:47.422 Copy (19h): Supported LBA-Change 00:12:47.422 00:12:47.422 Error Log 00:12:47.422 ========= 00:12:47.422 00:12:47.422 Arbitration 00:12:47.422 =========== 00:12:47.422 Arbitration Burst: 1 00:12:47.422 00:12:47.422 Power Management 00:12:47.422 ================ 00:12:47.422 Number of Power States: 1 00:12:47.422 Current Power State: Power State #0 00:12:47.422 Power State #0: 00:12:47.422 Max Power: 0.00 W 00:12:47.422 Non-Operational State: Operational 00:12:47.422 Entry Latency: Not Reported 00:12:47.422 Exit Latency: Not Reported 00:12:47.422 Relative Read Throughput: 0 00:12:47.422 Relative Read Latency: 0 00:12:47.422 Relative Write Throughput: 0 00:12:47.422 Relative Write Latency: 0 00:12:47.422 Idle Power: Not Reported 00:12:47.422 Active Power: Not Reported 00:12:47.422 Non-Operational Permissive Mode: Not Supported 00:12:47.422 00:12:47.422 Health Information 00:12:47.422 ================== 00:12:47.422 
Critical Warnings: 00:12:47.422 Available Spare Space: OK 00:12:47.422 Temperature: OK 00:12:47.422 Device Reliability: OK 00:12:47.422 Read Only: No 00:12:47.422 Volatile Memory Backup: OK 00:12:47.422 Current Temperature: 0 Kelvin (-273 Celsius) 00:12:47.422 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:12:47.422 Available Spare: 0%
[2024-07-26 12:13:40.508244] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:12:47.422 [2024-07-26 12:13:40.516071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:12:47.422 [2024-07-26 12:13:40.516121] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:12:47.422 [2024-07-26 12:13:40.516140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.422 [2024-07-26 12:13:40.516151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.422 [2024-07-26 12:13:40.516161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.422 [2024-07-26 12:13:40.516170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:12:47.422 [2024-07-26 12:13:40.516249] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:12:47.422 [2024-07-26 12:13:40.516271] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:12:47.422 [2024-07-26 12:13:40.517256] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:47.422 [2024-07-26 12:13:40.517329] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:12:47.422 [2024-07-26 12:13:40.517345] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:12:47.422 [2024-07-26 12:13:40.518263] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:12:47.422 [2024-07-26 12:13:40.518288] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:12:47.422 [2024-07-26 12:13:40.518343] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:12:47.422 [2024-07-26 12:13:40.521072] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000
00:12:47.422 Available Spare Threshold: 0% 00:12:47.422 Life Percentage Used: 0% 00:12:47.422 Data Units Read: 0 00:12:47.422 Data Units Written: 0 00:12:47.422 Host Read Commands: 0 00:12:47.422 Host Write Commands: 0 00:12:47.422 Controller Busy Time: 0 minutes 00:12:47.422 Power Cycles: 0 00:12:47.422 Power On Hours: 0 hours 00:12:47.422 Unsafe Shutdowns: 0 00:12:47.422 Unrecoverable Media Errors: 0 00:12:47.422 Lifetime Error Log Entries: 0 00:12:47.422 Warning Temperature Time: 0 minutes 00:12:47.422 Critical Temperature Time: 0 minutes 00:12:47.422 00:12:47.422 Number of Queues 00:12:47.422 ================ 00:12:47.422 Number of I/O Submission Queues: 127 00:12:47.422 Number of I/O Completion Queues: 127 00:12:47.422 00:12:47.422 Active Namespaces 00:12:47.422 ================= 00:12:47.422 Namespace ID:1 00:12:47.422 Error Recovery Timeout: Unlimited 00:12:47.422 Command Set Identifier: NVM (00h) 00:12:47.422 Deallocate: 
Supported 00:12:47.422 Deallocated/Unwritten Error: Not Supported 00:12:47.422 Deallocated Read Value: Unknown 00:12:47.422 Deallocate in Write Zeroes: Not Supported 00:12:47.422 Deallocated Guard Field: 0xFFFF 00:12:47.422 Flush: Supported 00:12:47.422 Reservation: Supported 00:12:47.422 Namespace Sharing Capabilities: Multiple Controllers 00:12:47.422 Size (in LBAs): 131072 (0GiB) 00:12:47.422 Capacity (in LBAs): 131072 (0GiB) 00:12:47.422 Utilization (in LBAs): 131072 (0GiB) 00:12:47.422 NGUID: 3FD908165BC6451AA5FA411757B933A5 00:12:47.422 UUID: 3fd90816-5bc6-451a-a5fa-411757b933a5 00:12:47.422 Thin Provisioning: Not Supported 00:12:47.422 Per-NS Atomic Units: Yes 00:12:47.422 Atomic Boundary Size (Normal): 0 00:12:47.422 Atomic Boundary Size (PFail): 0 00:12:47.422 Atomic Boundary Offset: 0 00:12:47.422 Maximum Single Source Range Length: 65535 00:12:47.422 Maximum Copy Length: 65535 00:12:47.423 Maximum Source Range Count: 1 00:12:47.423 NGUID/EUI64 Never Reused: No 00:12:47.423 Namespace Write Protected: No 00:12:47.423 Number of LBA Formats: 1 00:12:47.423 Current LBA Format: LBA Format #00 00:12:47.423 LBA Format #00: Data Size: 512 Metadata Size: 0 00:12:47.423 00:12:47.423 12:13:40 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:12:47.423 EAL: No free 2048 kB hugepages reported on node 1 00:12:47.681 [2024-07-26 12:13:40.749959] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:52.960 Initializing NVMe Controllers 00:12:52.960 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:52.960 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:52.960 
Initialization complete. Launching workers. 00:12:52.960 ======================================================== 00:12:52.960 Latency(us) 00:12:52.960 Device Information : IOPS MiB/s Average min max 00:12:52.960 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 33967.88 132.69 3767.95 1185.70 9889.61 00:12:52.960 ======================================================== 00:12:52.960 Total : 33967.88 132.69 3767.95 1185.70 9889.61 00:12:52.960 00:12:52.960 [2024-07-26 12:13:45.848478] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:52.960 12:13:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:12:52.960 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.960 [2024-07-26 12:13:46.091131] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:12:58.243 Initializing NVMe Controllers 00:12:58.243 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:12:58.243 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:12:58.243 Initialization complete. Launching workers. 
00:12:58.243 ======================================================== 00:12:58.243 Latency(us) 00:12:58.243 Device Information : IOPS MiB/s Average min max 00:12:58.243 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 31704.15 123.84 4036.24 1215.56 8290.56 00:12:58.243 ======================================================== 00:12:58.243 Total : 31704.15 123.84 4036.24 1215.56 8290.56 00:12:58.243 00:12:58.243 [2024-07-26 12:13:51.109859] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:12:58.243 12:13:51 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:12:58.243 EAL: No free 2048 kB hugepages reported on node 1 00:12:58.243 [2024-07-26 12:13:51.322728] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:03.573 [2024-07-26 12:13:56.453534] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:03.573 Initializing NVMe Controllers 00:13:03.573 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:03.573 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:13:03.573 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:13:03.573 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:13:03.573 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:13:03.573 Initialization complete. Launching workers. 
00:13:03.573 Starting thread on core 2 00:13:03.573 Starting thread on core 3 00:13:03.573 Starting thread on core 1 00:13:03.573 12:13:56 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:13:03.573 EAL: No free 2048 kB hugepages reported on node 1 00:13:03.573 [2024-07-26 12:13:56.763586] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:06.867 [2024-07-26 12:13:59.911359] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:06.867 Initializing NVMe Controllers 00:13:06.867 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:06.867 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:06.867 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:13:06.867 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:13:06.867 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:13:06.867 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:13:06.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:13:06.867 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:13:06.867 Initialization complete. Launching workers. 
00:13:06.867 Starting thread on core 1 with urgent priority queue 00:13:06.867 Starting thread on core 2 with urgent priority queue 00:13:06.867 Starting thread on core 3 with urgent priority queue 00:13:06.867 Starting thread on core 0 with urgent priority queue 00:13:06.867 SPDK bdev Controller (SPDK2 ) core 0: 2411.00 IO/s 41.48 secs/100000 ios 00:13:06.867 SPDK bdev Controller (SPDK2 ) core 1: 2573.33 IO/s 38.86 secs/100000 ios 00:13:06.867 SPDK bdev Controller (SPDK2 ) core 2: 2719.67 IO/s 36.77 secs/100000 ios 00:13:06.867 SPDK bdev Controller (SPDK2 ) core 3: 2032.67 IO/s 49.20 secs/100000 ios 00:13:06.867 ======================================================== 00:13:06.867 00:13:06.867 12:13:59 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:06.867 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.126 [2024-07-26 12:14:00.211554] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:07.126 Initializing NVMe Controllers 00:13:07.126 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:07.126 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:07.126 Namespace ID: 1 size: 0GB 00:13:07.126 Initialization complete. 00:13:07.126 INFO: using host memory buffer for IO 00:13:07.126 Hello world! 
00:13:07.126 [2024-07-26 12:14:00.224774] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:07.126 12:14:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:13:07.126 EAL: No free 2048 kB hugepages reported on node 1 00:13:07.385 [2024-07-26 12:14:00.513811] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:08.759 Initializing NVMe Controllers 00:13:08.759 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:08.759 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:08.759 Initialization complete. Launching workers. 00:13:08.759 submit (in ns) avg, min, max = 8053.1, 3501.1, 4024642.2 00:13:08.759 complete (in ns) avg, min, max = 26591.6, 2053.3, 4016597.8 00:13:08.759 00:13:08.759 Submit histogram 00:13:08.759 ================ 00:13:08.759 Range in us Cumulative Count 00:13:08.759 3.484 - 3.508: 0.0594% ( 8) 00:13:08.759 3.508 - 3.532: 1.3819% ( 178) 00:13:08.759 3.532 - 3.556: 3.5215% ( 288) 00:13:08.759 3.556 - 3.579: 9.0193% ( 740) 00:13:08.759 3.579 - 3.603: 16.6196% ( 1023) 00:13:08.759 3.603 - 3.627: 27.7489% ( 1498) 00:13:08.759 3.627 - 3.650: 37.4740% ( 1309) 00:13:08.759 3.650 - 3.674: 45.3418% ( 1059) 00:13:08.759 3.674 - 3.698: 51.7979% ( 869) 00:13:08.759 3.698 - 3.721: 58.1575% ( 856) 00:13:08.759 3.721 - 3.745: 62.7786% ( 622) 00:13:08.759 3.745 - 3.769: 66.5007% ( 501) 00:13:08.759 3.769 - 3.793: 69.8737% ( 454) 00:13:08.759 3.793 - 3.816: 72.8529% ( 401) 00:13:08.759 3.816 - 3.840: 76.3001% ( 464) 00:13:08.759 3.840 - 3.864: 80.1189% ( 514) 00:13:08.759 3.864 - 3.887: 83.5958% ( 468) 00:13:08.759 3.887 - 3.911: 86.3299% ( 368) 00:13:08.759 3.911 - 3.935: 88.6181% ( 308) 00:13:08.759 3.935 - 
3.959: 90.2303% ( 217) 00:13:08.759 3.959 - 3.982: 91.6196% ( 187) 00:13:08.759 3.982 - 4.006: 92.9866% ( 184) 00:13:08.759 4.006 - 4.030: 93.9525% ( 130) 00:13:08.759 4.030 - 4.053: 94.8217% ( 117) 00:13:08.759 4.053 - 4.077: 95.4458% ( 84) 00:13:08.759 4.077 - 4.101: 96.0253% ( 78) 00:13:08.759 4.101 - 4.124: 96.3150% ( 39) 00:13:08.759 4.124 - 4.148: 96.6345% ( 43) 00:13:08.759 4.148 - 4.172: 96.7905% ( 21) 00:13:08.759 4.172 - 4.196: 96.9539% ( 22) 00:13:08.759 4.196 - 4.219: 97.0877% ( 18) 00:13:08.759 4.219 - 4.243: 97.1694% ( 11) 00:13:08.759 4.243 - 4.267: 97.2808% ( 15) 00:13:08.759 4.267 - 4.290: 97.3551% ( 10) 00:13:08.759 4.290 - 4.314: 97.5111% ( 21) 00:13:08.759 4.314 - 4.338: 97.5706% ( 8) 00:13:08.759 4.338 - 4.361: 97.6077% ( 5) 00:13:08.759 4.361 - 4.385: 97.6300% ( 3) 00:13:08.759 4.385 - 4.409: 97.6597% ( 4) 00:13:08.759 4.409 - 4.433: 97.6895% ( 4) 00:13:08.759 4.433 - 4.456: 97.6969% ( 1) 00:13:08.759 4.480 - 4.504: 97.7043% ( 1) 00:13:08.759 4.504 - 4.527: 97.7266% ( 3) 00:13:08.759 4.575 - 4.599: 97.7340% ( 1) 00:13:08.759 4.599 - 4.622: 97.7415% ( 1) 00:13:08.759 4.622 - 4.646: 97.7489% ( 1) 00:13:08.759 4.670 - 4.693: 97.7712% ( 3) 00:13:08.759 4.693 - 4.717: 97.8232% ( 7) 00:13:08.759 4.717 - 4.741: 97.8678% ( 6) 00:13:08.759 4.741 - 4.764: 97.8826% ( 2) 00:13:08.759 4.764 - 4.788: 97.9049% ( 3) 00:13:08.759 4.788 - 4.812: 97.9718% ( 9) 00:13:08.759 4.812 - 4.836: 98.0386% ( 9) 00:13:08.759 4.836 - 4.859: 98.0981% ( 8) 00:13:08.759 4.859 - 4.883: 98.1204% ( 3) 00:13:08.759 4.883 - 4.907: 98.1649% ( 6) 00:13:08.759 4.907 - 4.930: 98.1947% ( 4) 00:13:08.759 4.930 - 4.954: 98.2318% ( 5) 00:13:08.759 4.954 - 4.978: 98.2689% ( 5) 00:13:08.759 4.978 - 5.001: 98.2912% ( 3) 00:13:08.759 5.001 - 5.025: 98.3061% ( 2) 00:13:08.759 5.025 - 5.049: 98.3432% ( 5) 00:13:08.759 5.049 - 5.073: 98.3655% ( 3) 00:13:08.759 5.073 - 5.096: 98.4101% ( 6) 00:13:08.759 5.096 - 5.120: 98.4324% ( 3) 00:13:08.759 5.120 - 5.144: 98.4473% ( 2) 00:13:08.759 5.144 - 
5.167: 98.4770% ( 4) 00:13:08.759 5.167 - 5.191: 98.4918% ( 2) 00:13:08.759 5.191 - 5.215: 98.5067% ( 2) 00:13:08.759 5.215 - 5.239: 98.5141% ( 1) 00:13:08.759 5.239 - 5.262: 98.5215% ( 1) 00:13:08.759 5.262 - 5.286: 98.5290% ( 1) 00:13:08.759 5.286 - 5.310: 98.5364% ( 1) 00:13:08.759 5.310 - 5.333: 98.5438% ( 1) 00:13:08.759 5.404 - 5.428: 98.5513% ( 1) 00:13:08.759 5.452 - 5.476: 98.5587% ( 1) 00:13:08.759 5.713 - 5.736: 98.5661% ( 1) 00:13:08.759 6.021 - 6.044: 98.5736% ( 1) 00:13:08.759 6.495 - 6.542: 98.5810% ( 1) 00:13:08.759 6.542 - 6.590: 98.5884% ( 1) 00:13:08.759 6.732 - 6.779: 98.5958% ( 1) 00:13:08.759 6.779 - 6.827: 98.6033% ( 1) 00:13:08.759 6.874 - 6.921: 98.6107% ( 1) 00:13:08.759 6.921 - 6.969: 98.6181% ( 1) 00:13:08.759 7.348 - 7.396: 98.6330% ( 2) 00:13:08.759 7.396 - 7.443: 98.6404% ( 1) 00:13:08.759 7.443 - 7.490: 98.6478% ( 1) 00:13:08.759 7.870 - 7.917: 98.6701% ( 3) 00:13:08.759 7.917 - 7.964: 98.6776% ( 1) 00:13:08.759 7.964 - 8.012: 98.6850% ( 1) 00:13:08.759 8.012 - 8.059: 98.7073% ( 3) 00:13:08.759 8.107 - 8.154: 98.7147% ( 1) 00:13:08.759 8.154 - 8.201: 98.7221% ( 1) 00:13:08.759 8.201 - 8.249: 98.7370% ( 2) 00:13:08.759 8.296 - 8.344: 98.7593% ( 3) 00:13:08.759 8.391 - 8.439: 98.7741% ( 2) 00:13:08.759 8.439 - 8.486: 98.7816% ( 1) 00:13:08.759 8.486 - 8.533: 98.7890% ( 1) 00:13:08.759 8.533 - 8.581: 98.7964% ( 1) 00:13:08.759 8.581 - 8.628: 98.8113% ( 2) 00:13:08.759 8.628 - 8.676: 98.8187% ( 1) 00:13:08.759 8.676 - 8.723: 98.8262% ( 1) 00:13:08.759 8.723 - 8.770: 98.8410% ( 2) 00:13:08.759 8.770 - 8.818: 98.8559% ( 2) 00:13:08.759 8.818 - 8.865: 98.8633% ( 1) 00:13:08.759 8.913 - 8.960: 98.8782% ( 2) 00:13:08.759 9.007 - 9.055: 98.8856% ( 1) 00:13:08.759 9.055 - 9.102: 98.8930% ( 1) 00:13:08.759 9.197 - 9.244: 98.9004% ( 1) 00:13:08.759 9.244 - 9.292: 98.9153% ( 2) 00:13:08.759 9.292 - 9.339: 98.9227% ( 1) 00:13:08.759 9.339 - 9.387: 98.9302% ( 1) 00:13:08.759 9.624 - 9.671: 98.9376% ( 1) 00:13:08.759 9.671 - 9.719: 98.9450% ( 1) 
00:13:08.759 10.335 - 10.382: 98.9525% ( 1) 00:13:08.759 10.856 - 10.904: 98.9673% ( 2) 00:13:08.759 10.951 - 10.999: 98.9747% ( 1) 00:13:08.759 11.046 - 11.093: 98.9822% ( 1) 00:13:08.759 11.283 - 11.330: 98.9896% ( 1) 00:13:08.759 11.567 - 11.615: 98.9970% ( 1) 00:13:08.759 11.615 - 11.662: 99.0045% ( 1) 00:13:08.759 11.662 - 11.710: 99.0193% ( 2) 00:13:08.759 11.899 - 11.947: 99.0267% ( 1) 00:13:08.759 11.994 - 12.041: 99.0342% ( 1) 00:13:08.759 12.326 - 12.421: 99.0416% ( 1) 00:13:08.759 12.421 - 12.516: 99.0490% ( 1) 00:13:08.759 12.800 - 12.895: 99.0639% ( 2) 00:13:08.759 13.084 - 13.179: 99.0788% ( 2) 00:13:08.759 13.653 - 13.748: 99.0862% ( 1) 00:13:08.759 14.222 - 14.317: 99.0936% ( 1) 00:13:08.759 14.507 - 14.601: 99.1010% ( 1) 00:13:08.759 14.696 - 14.791: 99.1085% ( 1) 00:13:08.759 17.067 - 17.161: 99.1159% ( 1) 00:13:08.759 17.161 - 17.256: 99.1233% ( 1) 00:13:08.759 17.256 - 17.351: 99.1456% ( 3) 00:13:08.759 17.446 - 17.541: 99.1605% ( 2) 00:13:08.759 17.541 - 17.636: 99.1753% ( 2) 00:13:08.759 17.636 - 17.730: 99.1976% ( 3) 00:13:08.759 17.730 - 17.825: 99.2571% ( 8) 00:13:08.759 17.825 - 17.920: 99.2868% ( 4) 00:13:08.759 17.920 - 18.015: 99.3685% ( 11) 00:13:08.759 18.015 - 18.110: 99.3759% ( 1) 00:13:08.759 18.110 - 18.204: 99.4205% ( 6) 00:13:08.759 18.204 - 18.299: 99.4948% ( 10) 00:13:08.759 18.299 - 18.394: 99.5542% ( 8) 00:13:08.759 18.394 - 18.489: 99.5914% ( 5) 00:13:08.759 18.489 - 18.584: 99.6657% ( 10) 00:13:08.759 18.584 - 18.679: 99.7474% ( 11) 00:13:08.759 18.679 - 18.773: 99.7623% ( 2) 00:13:08.759 18.773 - 18.868: 99.7771% ( 2) 00:13:08.759 18.868 - 18.963: 99.7920% ( 2) 00:13:08.759 19.058 - 19.153: 99.8068% ( 2) 00:13:08.759 19.153 - 19.247: 99.8143% ( 1) 00:13:08.759 19.532 - 19.627: 99.8217% ( 1) 00:13:08.759 19.627 - 19.721: 99.8291% ( 1) 00:13:08.759 20.101 - 20.196: 99.8366% ( 1) 00:13:08.759 21.144 - 21.239: 99.8440% ( 1) 00:13:08.759 22.187 - 22.281: 99.8514% ( 1) 00:13:08.759 22.471 - 22.566: 99.8588% ( 1) 00:13:08.759 
22.945 - 23.040: 99.8663% ( 1) 00:13:08.759 23.324 - 23.419: 99.8737% ( 1) 00:13:08.759 23.609 - 23.704: 99.8811% ( 1) 00:13:08.759 24.083 - 24.178: 99.8886% ( 1) 00:13:08.759 24.178 - 24.273: 99.8960% ( 1) 00:13:08.759 3980.705 - 4004.978: 99.9480% ( 7) 00:13:08.759 4004.978 - 4029.250: 100.0000% ( 7) 00:13:08.759 00:13:08.759 Complete histogram 00:13:08.759 ================== 00:13:08.759 Range in us Cumulative Count 00:13:08.759 2.050 - 2.062: 0.3789% ( 51) 00:13:08.759 2.062 - 2.074: 19.9554% ( 2635) 00:13:08.759 2.074 - 2.086: 39.7920% ( 2670) 00:13:08.759 2.086 - 2.098: 42.2511% ( 331) 00:13:08.759 2.098 - 2.110: 56.3967% ( 1904) 00:13:08.759 2.110 - 2.121: 63.4844% ( 954) 00:13:08.759 2.121 - 2.133: 66.1738% ( 362) 00:13:08.759 2.133 - 2.145: 75.0371% ( 1193) 00:13:08.759 2.145 - 2.157: 79.2422% ( 566) 00:13:08.759 2.157 - 2.169: 81.1367% ( 255) 00:13:08.759 2.169 - 2.181: 86.3893% ( 707) 00:13:08.759 2.181 - 2.193: 88.2467% ( 250) 00:13:08.759 2.193 - 2.204: 88.9896% ( 100) 00:13:08.759 2.204 - 2.216: 90.8767% ( 254) 00:13:08.759 2.216 - 2.228: 92.8975% ( 272) 00:13:08.759 2.228 - 2.240: 94.1753% ( 172) 00:13:08.759 2.240 - 2.252: 95.1114% ( 126) 00:13:08.759 2.252 - 2.264: 95.4086% ( 40) 00:13:08.759 2.264 - 2.276: 95.4903% ( 11) 00:13:08.759 2.276 - 2.287: 95.6686% ( 24) 00:13:08.759 2.287 - 2.299: 95.9955% ( 44) 00:13:08.759 2.299 - 2.311: 96.2333% ( 32) 00:13:08.759 2.311 - 2.323: 96.3150% ( 11) 00:13:08.759 2.323 - 2.335: 96.3819% ( 9) 00:13:08.759 2.335 - 2.347: 96.4636% ( 11) 00:13:08.759 2.347 - 2.359: 96.7013% ( 32) 00:13:08.759 2.359 - 2.370: 97.1397% ( 59) 00:13:08.759 2.370 - 2.382: 97.4740% ( 45) 00:13:08.759 2.382 - 2.394: 97.7266% ( 34) 00:13:08.759 2.394 - 2.406: 97.9792% ( 34) 00:13:08.759 2.406 - 2.418: 98.1649% ( 25) 00:13:08.759 2.418 - 2.430: 98.2615% ( 13) 00:13:08.759 2.430 - 2.441: 98.3284% ( 9) 00:13:08.759 2.441 - 2.453: 98.3804% ( 7) 00:13:08.759 2.453 - 2.465: 98.4398% ( 8) 00:13:08.759 2.465 - 2.477: 98.5215% ( 11) 00:13:08.759 
2.477 - 2.489: 98.5438% ( 3) 00:13:08.759 2.489 - 2.501: 98.5661% ( 3) 00:13:08.759 2.501 - 2.513: 98.5958% ( 4) 00:13:08.759 2.513 - 2.524: 98.6107% ( 2) 00:13:08.759 2.524 - 2.536: 98.6181% ( 1) 00:13:08.759 2.643 - 2.655: 98.6256% ( 1) 00:13:08.759 2.655 - 2.667: 98.6330% ( 1) 00:13:08.759 2.679 - 2.690: 98.6404% ( 1) 00:13:08.759 2.690 - 2.702: 98.6553% ( 2) 00:13:08.759 2.702 - 2.714: 98.6627% ( 1) 00:13:08.759 2.714 - 2.726: 98.6701% ( 1) 00:13:08.759 2.726 - 2.738: 98.6776% ( 1) 00:13:08.759 3.390 - 3.413: 98.6850% ( 1) 00:13:08.759 3.461 - 3.484: 98.7073% ( 3) 00:13:08.759 3.556 - 3.579: 98.7147% ( 1) 00:13:08.759 3.769 - 3.793: 98.7221% ( 1) 00:13:08.759 3.793 - 3.816: 98.7370% ( 2) 00:13:08.759 3.840 - 3.864: 98.7444% ( 1) 00:13:08.759 3.864 - 3.887: 98.7593% ( 2) 00:13:08.759 3.935 - 3.959: 98.7667% ( 1) 00:13:08.759 4.030 - 4.053: 98.7741% ( 1) 00:13:08.759 4.053 - 4.077: 98.7890% ( 2) 00:13:08.759 4.148 - 4.172: 98.8187% ( 4) 00:13:08.759 4.196 - 4.219: 98.8262% ( 1) 00:13:08.759 5.310 - 5.333: 98.8336% ( 1) 00:13:08.759 5.404 - 5.428: 98.8410% ( 1) 00:13:08.759 5.665 - 5.689: 98.8484% ( 1) 00:13:08.759 5.807 - 5.831: 98.8559% ( 1) 00:13:08.759 6.044 - 6.068: 98.8633% ( 1) 00:13:08.759 6.353 - 6.400: 98.8707% ( 1) 00:13:08.759 6.400 - 6.447: 98.8782% ( 1) 00:13:08.759 6.447 - 6.495: 98.8856% ( 1) 00:13:08.759 6.495 - 6.542: 98.8930% ( 1) 00:13:08.759 6.732 - 6.779: 98.9153% ( 3) 00:13:08.759 7.016 - 7.064: 98.9227% ( 1) 00:13:08.759 7.064 - 7.111: 98.9376% ( 2) 00:13:08.760 7.111 - 7.159: 98.9450% ( 1) 00:13:08.760 7.253 - 7.301: 98.9525% ( 1) 00:13:08.760 7.396 - 7.443: 98.9599% ( 1) 00:13:08.760 8.012 - 8.059: 98.9673% ( 1) 00:13:08.760 9.244 - 9.292: 98.9747% ( 1) 00:13:08.760 9.766 - 9.813: 98.9822% ( 1) 00:13:08.760 15.455 - 15.550: 98.9896% ( 1) 00:13:08.760 15.550 - 15.644: 98.9970% ( 1) 00:13:08.760 15.644 - 15.739: 99.0193% ( 3) 00:13:08.760 15.739 - 15.834: 99.0267% ( 1) 00:13:08.760 15.834 - 15.929: 99.0490% ( 3) 00:13:08.760 15.929 - 
16.024: 99.0565% ( 1) 00:13:08.760 16.024 - 16.119: 99.0713% ( 2) 00:13:08.760 16.119 - 16.213: 99.0788% ( 1) 00:13:08.760 [2024-07-26 12:14:01.608845] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:08.760 16.403 - 16.498: 99.1085% ( 4) 00:13:08.760 16.498 - 16.593: 99.1530% ( 6) 00:13:08.760 16.593 - 16.687: 99.1902% ( 5) 00:13:08.760 16.687 - 16.782: 99.2051% ( 2) 00:13:08.760 16.782 - 16.877: 99.2199% ( 2) 00:13:08.760 16.877 - 16.972: 99.2496% ( 4) 00:13:08.760 16.972 - 17.067: 99.2645% ( 2) 00:13:08.760 17.067 - 17.161: 99.2868% ( 3) 00:13:08.760 17.161 - 17.256: 99.2942% ( 1) 00:13:08.760 17.351 - 17.446: 99.3016% ( 1) 00:13:08.760 17.446 - 17.541: 99.3091% ( 1) 00:13:08.760 17.541 - 17.636: 99.3239% ( 2) 00:13:08.760 17.730 - 17.825: 99.3314% ( 1) 00:13:08.760 17.825 - 17.920: 99.3536% ( 3) 00:13:08.760 17.920 - 18.015: 99.3611% ( 1) 00:13:08.760 18.015 - 18.110: 99.3685% ( 1) 00:13:08.760 18.299 - 18.394: 99.3759% ( 1) 00:13:08.760 18.489 - 18.584: 99.3834% ( 1) 00:13:08.760 19.911 - 20.006: 99.3908% ( 1) 00:13:08.760 3980.705 - 4004.978: 99.7028% ( 42) 00:13:08.760 4004.978 - 4029.250: 100.0000% ( 40) 00:13:08.760 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 
[ 00:13:08.760 { 00:13:08.760 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:08.760 "subtype": "Discovery", 00:13:08.760 "listen_addresses": [], 00:13:08.760 "allow_any_host": true, 00:13:08.760 "hosts": [] 00:13:08.760 }, 00:13:08.760 { 00:13:08.760 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:08.760 "subtype": "NVMe", 00:13:08.760 "listen_addresses": [ 00:13:08.760 { 00:13:08.760 "trtype": "VFIOUSER", 00:13:08.760 "adrfam": "IPv4", 00:13:08.760 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:08.760 "trsvcid": "0" 00:13:08.760 } 00:13:08.760 ], 00:13:08.760 "allow_any_host": true, 00:13:08.760 "hosts": [], 00:13:08.760 "serial_number": "SPDK1", 00:13:08.760 "model_number": "SPDK bdev Controller", 00:13:08.760 "max_namespaces": 32, 00:13:08.760 "min_cntlid": 1, 00:13:08.760 "max_cntlid": 65519, 00:13:08.760 "namespaces": [ 00:13:08.760 { 00:13:08.760 "nsid": 1, 00:13:08.760 "bdev_name": "Malloc1", 00:13:08.760 "name": "Malloc1", 00:13:08.760 "nguid": "23F04DC1B44E4EFF94A553C9C449C6B5", 00:13:08.760 "uuid": "23f04dc1-b44e-4eff-94a5-53c9c449c6b5" 00:13:08.760 }, 00:13:08.760 { 00:13:08.760 "nsid": 2, 00:13:08.760 "bdev_name": "Malloc3", 00:13:08.760 "name": "Malloc3", 00:13:08.760 "nguid": "93F7C37218CD401795BEC8533A040136", 00:13:08.760 "uuid": "93f7c372-18cd-4017-95be-c8533a040136" 00:13:08.760 } 00:13:08.760 ] 00:13:08.760 }, 00:13:08.760 { 00:13:08.760 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:08.760 "subtype": "NVMe", 00:13:08.760 "listen_addresses": [ 00:13:08.760 { 00:13:08.760 "trtype": "VFIOUSER", 00:13:08.760 "adrfam": "IPv4", 00:13:08.760 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:08.760 "trsvcid": "0" 00:13:08.760 } 00:13:08.760 ], 00:13:08.760 "allow_any_host": true, 00:13:08.760 "hosts": [], 00:13:08.760 "serial_number": "SPDK2", 00:13:08.760 "model_number": "SPDK bdev Controller", 00:13:08.760 "max_namespaces": 32, 00:13:08.760 "min_cntlid": 1, 00:13:08.760 "max_cntlid": 65519, 00:13:08.760 "namespaces": [ 00:13:08.760 { 
00:13:08.760 "nsid": 1, 00:13:08.760 "bdev_name": "Malloc2", 00:13:08.760 "name": "Malloc2", 00:13:08.760 "nguid": "3FD908165BC6451AA5FA411757B933A5", 00:13:08.760 "uuid": "3fd90816-5bc6-451a-a5fa-411757b933a5" 00:13:08.760 } 00:13:08.760 ] 00:13:08.760 } 00:13:08.760 ] 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=2855983 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:13:08.760 12:14:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:13:09.018 EAL: No free 2048 kB hugepages reported on node 1 00:13:09.018 [2024-07-26 12:14:02.120523] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:13:09.018 Malloc4 00:13:09.018 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:13:09.276 [2024-07-26 12:14:02.514518] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:13:09.535 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:13:09.535 Asynchronous Event Request test 00:13:09.535 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:13:09.535 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:13:09.535 Registering asynchronous event callbacks... 00:13:09.535 Starting namespace attribute notice tests for all controllers... 00:13:09.535 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:13:09.535 aer_cb - Changed Namespace 00:13:09.535 Cleaning up... 
00:13:09.535 [ 00:13:09.535 { 00:13:09.535 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:13:09.535 "subtype": "Discovery", 00:13:09.535 "listen_addresses": [], 00:13:09.535 "allow_any_host": true, 00:13:09.535 "hosts": [] 00:13:09.535 }, 00:13:09.535 { 00:13:09.535 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:13:09.535 "subtype": "NVMe", 00:13:09.535 "listen_addresses": [ 00:13:09.535 { 00:13:09.535 "trtype": "VFIOUSER", 00:13:09.535 "adrfam": "IPv4", 00:13:09.535 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:13:09.535 "trsvcid": "0" 00:13:09.535 } 00:13:09.535 ], 00:13:09.535 "allow_any_host": true, 00:13:09.535 "hosts": [], 00:13:09.535 "serial_number": "SPDK1", 00:13:09.535 "model_number": "SPDK bdev Controller", 00:13:09.535 "max_namespaces": 32, 00:13:09.535 "min_cntlid": 1, 00:13:09.535 "max_cntlid": 65519, 00:13:09.535 "namespaces": [ 00:13:09.535 { 00:13:09.535 "nsid": 1, 00:13:09.535 "bdev_name": "Malloc1", 00:13:09.535 "name": "Malloc1", 00:13:09.535 "nguid": "23F04DC1B44E4EFF94A553C9C449C6B5", 00:13:09.535 "uuid": "23f04dc1-b44e-4eff-94a5-53c9c449c6b5" 00:13:09.535 }, 00:13:09.535 { 00:13:09.535 "nsid": 2, 00:13:09.535 "bdev_name": "Malloc3", 00:13:09.535 "name": "Malloc3", 00:13:09.535 "nguid": "93F7C37218CD401795BEC8533A040136", 00:13:09.535 "uuid": "93f7c372-18cd-4017-95be-c8533a040136" 00:13:09.535 } 00:13:09.535 ] 00:13:09.535 }, 00:13:09.535 { 00:13:09.535 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:13:09.535 "subtype": "NVMe", 00:13:09.535 "listen_addresses": [ 00:13:09.535 { 00:13:09.535 "trtype": "VFIOUSER", 00:13:09.535 "adrfam": "IPv4", 00:13:09.535 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:13:09.535 "trsvcid": "0" 00:13:09.535 } 00:13:09.535 ], 00:13:09.535 "allow_any_host": true, 00:13:09.535 "hosts": [], 00:13:09.535 "serial_number": "SPDK2", 00:13:09.535 "model_number": "SPDK bdev Controller", 00:13:09.535 "max_namespaces": 32, 00:13:09.535 "min_cntlid": 1, 00:13:09.535 "max_cntlid": 65519, 00:13:09.535 "namespaces": [ 
00:13:09.535 { 00:13:09.535 "nsid": 1, 00:13:09.535 "bdev_name": "Malloc2", 00:13:09.535 "name": "Malloc2", 00:13:09.535 "nguid": "3FD908165BC6451AA5FA411757B933A5", 00:13:09.535 "uuid": "3fd90816-5bc6-451a-a5fa-411757b933a5" 00:13:09.535 }, 00:13:09.535 { 00:13:09.535 "nsid": 2, 00:13:09.535 "bdev_name": "Malloc4", 00:13:09.535 "name": "Malloc4", 00:13:09.535 "nguid": "00DB71A558604158A64B3A39496DD5DE", 00:13:09.535 "uuid": "00db71a5-5860-4158-a64b-3a39496dd5de" 00:13:09.535 } 00:13:09.535 ] 00:13:09.535 } 00:13:09.535 ] 00:13:09.535 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 2855983 00:13:09.535 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:13:09.535 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2850493 00:13:09.535 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2850493 ']' 00:13:09.535 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2850493 00:13:09.535 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:13:09.535 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:09.794 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2850493 00:13:09.794 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:09.794 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:09.794 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2850493' 00:13:09.794 killing process with pid 2850493 00:13:09.794 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@969 -- # kill 2850493 00:13:09.794 12:14:02 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2850493 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=2856232 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 2856232' 00:13:10.052 Process pid: 2856232 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 2856232 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@831 -- # '[' -z 2856232 ']' 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:10.052 
12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:10.052 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:10.052 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:10.052 [2024-07-26 12:14:03.250201] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:13:10.052 [2024-07-26 12:14:03.251255] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:13:10.052 [2024-07-26 12:14:03.251308] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:10.052 EAL: No free 2048 kB hugepages reported on node 1 00:13:10.312 [2024-07-26 12:14:03.313022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:10.312 [2024-07-26 12:14:03.429268] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:10.312 [2024-07-26 12:14:03.429337] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:10.312 [2024-07-26 12:14:03.429355] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:10.312 [2024-07-26 12:14:03.429370] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:10.312 [2024-07-26 12:14:03.429382] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:10.312 [2024-07-26 12:14:03.429479] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:10.312 [2024-07-26 12:14:03.429557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:10.312 [2024-07-26 12:14:03.429654] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:10.312 [2024-07-26 12:14:03.429656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:10.312 [2024-07-26 12:14:03.536356] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:13:10.312 [2024-07-26 12:14:03.536586] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:13:10.312 [2024-07-26 12:14:03.536898] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:13:10.312 [2024-07-26 12:14:03.537533] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:13:10.312 [2024-07-26 12:14:03.537770] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:13:10.312 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:10.312 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # return 0 00:13:10.312 12:14:03 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:13:11.689 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:13:11.689 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:13:11.689 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:13:11.689 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:11.689 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:13:11.689 12:14:04 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:13:11.947 Malloc1 00:13:11.947 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:13:12.206 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:13:12.464 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:13:13.031 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:13:13.032 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:13:13.032 12:14:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:13:13.032 Malloc2 00:13:13.032 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:13:13.290 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:13:13.548 12:14:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:13:13.807 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:13:13.807 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 2856232 00:13:13.807 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@950 -- # '[' -z 2856232 ']' 00:13:13.807 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # kill -0 2856232 00:13:13.807 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # uname 00:13:13.807 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:13.807 12:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2856232 00:13:13.807 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:13.807 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:13.807 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2856232' 00:13:13.807 killing process with pid 2856232 00:13:13.807 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@969 -- # kill 2856232 00:13:13.807 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@974 -- # wait 2856232 00:13:14.376 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:13:14.376 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:13:14.376 00:13:14.376 real 0m53.011s 00:13:14.376 user 3m28.983s 00:13:14.376 sys 0m4.508s 00:13:14.376 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:14.376 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:13:14.377 ************************************ 00:13:14.377 END TEST nvmf_vfio_user 00:13:14.377 ************************************ 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:13:14.377 ************************************ 00:13:14.377 START TEST nvmf_vfio_user_nvme_compliance 00:13:14.377 ************************************ 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:13:14.377 * Looking for test storage... 00:13:14.377 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 
-- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.377 12:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:14.377 12:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=2856722 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 2856722' 00:13:14.377 Process pid: 2856722 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 2856722 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@831 -- # '[' -z 2856722 ']' 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:14.377 12:14:07 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:14.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:14.377 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:14.377 [2024-07-26 12:14:07.552780] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:13:14.377 [2024-07-26 12:14:07.552858] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:14.377 EAL: No free 2048 kB hugepages reported on node 1 00:13:14.377 [2024-07-26 12:14:07.610337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:14.638 [2024-07-26 12:14:07.722833] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:14.638 [2024-07-26 12:14:07.722898] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:14.638 [2024-07-26 12:14:07.722915] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:14.638 [2024-07-26 12:14:07.722929] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:14.638 [2024-07-26 12:14:07.722940] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:13:14.638 [2024-07-26 12:14:07.723051] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:14.638 [2024-07-26 12:14:07.723108] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:14.638 [2024-07-26 12:14:07.723127] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:14.638 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:14.638 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # return 0 00:13:14.638 12:14:07 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.016 12:14:08 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:16.016 malloc0 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:13:16.016 12:14:08 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:13:16.016 EAL: No free 2048 kB hugepages reported on node 1 00:13:16.016 00:13:16.016 00:13:16.016 CUnit - A unit testing framework for C - Version 2.1-3 00:13:16.016 http://cunit.sourceforge.net/ 00:13:16.016 00:13:16.016 00:13:16.016 Suite: nvme_compliance 00:13:16.016 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-26 12:14:09.084607] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.016 [2024-07-26 12:14:09.086110] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:13:16.016 [2024-07-26 12:14:09.086135] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:13:16.016 [2024-07-26 12:14:09.086149] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:13:16.016 [2024-07-26 12:14:09.087633] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.016 passed 00:13:16.016 Test: admin_identify_ctrlr_verify_fused ...[2024-07-26 12:14:09.172244] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.016 [2024-07-26 12:14:09.175271] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.016 passed 00:13:16.016 Test: admin_identify_ns ...[2024-07-26 12:14:09.261601] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.275 [2024-07-26 12:14:09.321078] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:13:16.275 [2024-07-26 12:14:09.329092] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:13:16.275 [2024-07-26 
12:14:09.350223] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.275 passed 00:13:16.275 Test: admin_get_features_mandatory_features ...[2024-07-26 12:14:09.433590] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.275 [2024-07-26 12:14:09.436611] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.275 passed 00:13:16.275 Test: admin_get_features_optional_features ...[2024-07-26 12:14:09.524169] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.275 [2024-07-26 12:14:09.527188] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.534 passed 00:13:16.534 Test: admin_set_features_number_of_queues ...[2024-07-26 12:14:09.608696] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.534 [2024-07-26 12:14:09.709190] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.534 passed 00:13:16.794 Test: admin_get_log_page_mandatory_logs ...[2024-07-26 12:14:09.789859] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.794 [2024-07-26 12:14:09.794891] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.794 passed 00:13:16.794 Test: admin_get_log_page_with_lpo ...[2024-07-26 12:14:09.877618] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.794 [2024-07-26 12:14:09.946087] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:13:16.794 [2024-07-26 12:14:09.959174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:16.794 passed 00:13:16.794 Test: fabric_property_get ...[2024-07-26 12:14:10.042878] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:16.794 [2024-07-26 12:14:10.044346] 
vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:13:16.794 [2024-07-26 12:14:10.045916] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.055 passed 00:13:17.055 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-26 12:14:10.134603] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.055 [2024-07-26 12:14:10.135920] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:13:17.055 [2024-07-26 12:14:10.137625] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.055 passed 00:13:17.055 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-26 12:14:10.217616] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.055 [2024-07-26 12:14:10.305068] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:17.314 [2024-07-26 12:14:10.321084] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:17.314 [2024-07-26 12:14:10.326319] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.314 passed 00:13:17.314 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-26 12:14:10.406928] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.314 [2024-07-26 12:14:10.408253] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:13:17.314 [2024-07-26 12:14:10.411959] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.314 passed 00:13:17.314 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-26 12:14:10.492154] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.314 [2024-07-26 12:14:10.568073] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be 
deleted first 00:13:17.574 [2024-07-26 12:14:10.592096] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:13:17.574 [2024-07-26 12:14:10.597174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.574 passed 00:13:17.574 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-26 12:14:10.680330] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.574 [2024-07-26 12:14:10.681663] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:13:17.574 [2024-07-26 12:14:10.681717] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:13:17.574 [2024-07-26 12:14:10.684374] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.574 passed 00:13:17.574 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-26 12:14:10.768618] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.833 [2024-07-26 12:14:10.860083] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:13:17.833 [2024-07-26 12:14:10.868070] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:13:17.833 [2024-07-26 12:14:10.876068] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:13:17.833 [2024-07-26 12:14:10.884086] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:13:17.833 [2024-07-26 12:14:10.913176] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.833 passed 00:13:17.833 Test: admin_create_io_sq_verify_pc ...[2024-07-26 12:14:10.996792] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:17.833 [2024-07-26 12:14:11.013086] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:13:17.833 
[2024-07-26 12:14:11.030174] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:17.833 passed 00:13:18.091 Test: admin_create_io_qp_max_qps ...[2024-07-26 12:14:11.114772] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.054 [2024-07-26 12:14:12.218091] nvme_ctrlr.c:5469:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:13:19.623 [2024-07-26 12:14:12.607946] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.623 passed 00:13:19.623 Test: admin_create_io_sq_shared_cq ...[2024-07-26 12:14:12.690624] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:13:19.623 [2024-07-26 12:14:12.822086] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:13:19.623 [2024-07-26 12:14:12.859173] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:13:19.884 passed 00:13:19.884 00:13:19.884 Run Summary: Type Total Ran Passed Failed Inactive 00:13:19.884 suites 1 1 n/a 0 0 00:13:19.884 tests 18 18 18 0 0 00:13:19.884 asserts 360 360 360 0 n/a 00:13:19.884 00:13:19.884 Elapsed time = 1.566 seconds 00:13:19.884 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 2856722 00:13:19.884 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@950 -- # '[' -z 2856722 ']' 00:13:19.884 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # kill -0 2856722 00:13:19.884 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # uname 00:13:19.884 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:19.884 12:14:12 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2856722 00:13:19.884 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:19.884 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:19.884 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2856722' 00:13:19.884 killing process with pid 2856722 00:13:19.884 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@969 -- # kill 2856722 00:13:19.884 12:14:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@974 -- # wait 2856722 00:13:20.143 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:13:20.143 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:13:20.143 00:13:20.143 real 0m5.814s 00:13:20.143 user 0m16.254s 00:13:20.143 sys 0m0.541s 00:13:20.143 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:13:20.144 ************************************ 00:13:20.144 END TEST nvmf_vfio_user_nvme_compliance 00:13:20.144 ************************************ 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 
00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:20.144 ************************************ 00:13:20.144 START TEST nvmf_vfio_user_fuzz 00:13:20.144 ************************************ 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:13:20.144 * Looking for test storage... 00:13:20.144 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.144 12:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:13:20.144 12:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=2857527 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 2857527' 00:13:20.144 Process pid: 2857527 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 2857527 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@831 -- # '[' -z 2857527 ']' 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:20.144 12:14:13 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:20.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:20.144 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:20.710 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:20.710 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # return 0 00:13:20.710 12:14:13 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:13:21.645 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:13:21.645 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.645 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:21.645 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.645 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:13:21.645 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:13:21.645 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.645 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:21.645 malloc0 00:13:21.645 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.645 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:13:21.645 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.645 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:21.645 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.646 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:13:21.646 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.646 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:21.646 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.646 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:13:21.646 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:21.646 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:21.646 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:21.646 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:13:21.646 12:14:14 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:13:53.713 Fuzzing completed. Shutting down the fuzz application 00:13:53.713 00:13:53.713 Dumping successful admin opcodes: 00:13:53.713 8, 9, 10, 24, 00:13:53.713 Dumping successful io opcodes: 00:13:53.713 0, 00:13:53.713 NS: 0x200003a1ef00 I/O qp, Total commands completed: 578914, total successful commands: 2225, random_seed: 3265283776 00:13:53.713 NS: 0x200003a1ef00 admin qp, Total commands completed: 73754, total successful commands: 580, random_seed: 1126146304 00:13:53.713 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:13:53.713 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:53.713 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:53.713 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:53.713 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 2857527 00:13:53.713 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@950 -- # '[' -z 2857527 ']' 00:13:53.713 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # kill -0 2857527 00:13:53.713 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # uname 00:13:53.713 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:13:53.713 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2857527 00:13:53.713 12:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2857527' 00:13:53.714 killing process with pid 2857527 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@969 -- # kill 2857527 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@974 -- # wait 2857527 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:13:53.714 00:13:53.714 real 0m32.361s 00:13:53.714 user 0m31.377s 00:13:53.714 sys 0m29.127s 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1126 -- # xtrace_disable 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:13:53.714 ************************************ 00:13:53.714 END TEST nvmf_vfio_user_fuzz 00:13:53.714 ************************************ 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 
00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:53.714 ************************************ 00:13:53.714 START TEST nvmf_auth_target 00:13:53.714 ************************************ 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:13:53.714 * Looking for test storage... 00:13:53.714 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.714 12:14:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # 
gather_supported_nvmf_pci_devs 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:13:53.714 12:14:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:54.653 12:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:13:54.653 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:13:54.653 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:54.653 12:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:13:54.653 Found net devices under 0000:0a:00.0: cvl_0_0 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 
00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:13:54.653 Found net devices under 0000:0a:00.1: cvl_0_1 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:54.653 12:14:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:54.653 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:54.654 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:54.654 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:13:54.654 00:13:54.654 --- 10.0.0.2 ping statistics --- 00:13:54.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.654 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:54.654 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:54.654 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.179 ms 00:13:54.654 00:13:54.654 --- 10.0.0.1 ping statistics --- 00:13:54.654 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:54.654 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # 
timing_enter start_nvmf_tgt 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # xtrace_disable 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2862883 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2862883 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2862883 ']' 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:54.654 12:14:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=2863042 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- nvmf/common.sh@726 -- # digest=null 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=b12566d979e10cd18af2906cfb5e6744ce2609fcde97336b 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.tQP 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key b12566d979e10cd18af2906cfb5e6744ce2609fcde97336b 0 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 b12566d979e10cd18af2906cfb5e6744ce2609fcde97336b 0 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=b12566d979e10cd18af2906cfb5e6744ce2609fcde97336b 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.tQP 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.tQP 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.tQP 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c93edda9ac45577d3e19331963074c5725543db2fd4718208c6bd4061db533f5 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.c4u 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c93edda9ac45577d3e19331963074c5725543db2fd4718208c6bd4061db533f5 3 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 c93edda9ac45577d3e19331963074c5725543db2fd4718208c6bd4061db533f5 3 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c93edda9ac45577d3e19331963074c5725543db2fd4718208c6bd4061db533f5 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@704 -- # digest=3 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.c4u 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.c4u 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.c4u 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=c4fd7ad9abfb5f31615b79979bbe582f 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.eox 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key c4fd7ad9abfb5f31615b79979bbe582f 1 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
c4fd7ad9abfb5f31615b79979bbe582f 1 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=c4fd7ad9abfb5f31615b79979bbe582f 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:13:56.032 12:14:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.eox 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.eox 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.eox 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8a0cd7c943a737c0d0530bfecd74f9aaaeabd8eee3930acc 00:13:56.033 12:14:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.ebh 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8a0cd7c943a737c0d0530bfecd74f9aaaeabd8eee3930acc 2 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8a0cd7c943a737c0d0530bfecd74f9aaaeabd8eee3930acc 2 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8a0cd7c943a737c0d0530bfecd74f9aaaeabd8eee3930acc 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.ebh 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.ebh 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.ebh 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A 
digests 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=a11abc776f1c91547b25abefa607aa8dcae2395ccf9abb8d 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.0LS 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key a11abc776f1c91547b25abefa607aa8dcae2395ccf9abb8d 2 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 a11abc776f1c91547b25abefa607aa8dcae2395ccf9abb8d 2 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=a11abc776f1c91547b25abefa607aa8dcae2395ccf9abb8d 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.0LS 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.0LS 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # 
keys[2]=/tmp/spdk.key-sha384.0LS 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=0f8020e967f641e87cf4a0a82184a014 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.rYp 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 0f8020e967f641e87cf4a0a82184a014 1 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 0f8020e967f641e87cf4a0a82184a014 1 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=0f8020e967f641e87cf4a0a82184a014 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 
00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.rYp 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.rYp 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.rYp 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=9d35206b737bbba5158cc2570bae6fd91d688da8a3b3e9dd866d2fb5f61649e1 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.7Td 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 9d35206b737bbba5158cc2570bae6fd91d688da8a3b3e9dd866d2fb5f61649e1 3 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # 
format_key DHHC-1 9d35206b737bbba5158cc2570bae6fd91d688da8a3b3e9dd866d2fb5f61649e1 3 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=9d35206b737bbba5158cc2570bae6fd91d688da8a3b3e9dd866d2fb5f61649e1 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.7Td 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.7Td 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.7Td 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 2862883 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2862883 ']' 00:13:56.033 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:56.034 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.034 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:56.034 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
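The `gen_dhchap_key`/`format_key` trace above reduces to a short recipe: draw N/2 random bytes, hex-encode them (the ASCII hex string itself is the secret), append a CRC-32 of that string, base64 the result, and wrap it as `DHHC-1:<digest>:<b64>:`. A minimal standalone sketch of that flow follows; the little-endian CRC byte order is an assumption (it matches what nvme-cli's `gen-dhchap-key` produces), and the inline `python3 -` step stands in for the `python -` call visible in the trace:

```shell
# Sketch of the gen_dhchap_key -> format_key flow traced above.
# Assumption: the CRC-32 of the ASCII secret is appended little-endian before
# base64 encoding, as nvme-cli does; SPDK performs this step via "python -".
len=32                                    # hex chars; xxd reads len/2 raw bytes
key=$(xxd -p -c0 -l $((len / 2)) /dev/urandom)
digest=1                                  # 0=null, 1=sha256, 2=sha384, 3=sha512
python3 - "$key" "$digest" <<'EOF'
import base64, sys, zlib
secret = sys.argv[1].encode()             # the ASCII hex string is the secret
crc = zlib.crc32(secret).to_bytes(4, "little")
print(f"DHHC-1:{int(sys.argv[2]):02x}:{base64.b64encode(secret + crc).decode()}:")
EOF
```

Decoding the base64 payload of any `DHHC-1` key in the trace and re-checking the trailing CRC-32 against the leading bytes is a quick way to validate a generated key file.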
00:13:56.034 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.034 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.292 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.292 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:56.292 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 2863042 /var/tmp/host.sock 00:13:56.292 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2863042 ']' 00:13:56.292 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/host.sock 00:13:56.292 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:13:56.292 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:56.292 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:13:56.292 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:13:56.292 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.550 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:13:56.550 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:13:56.550 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:13:56.550 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.550 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.550 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.550 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:56.550 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.tQP 00:13:56.550 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.550 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.550 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.550 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.tQP 00:13:56.550 12:14:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.tQP 00:13:56.809 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha512.c4u ]] 00:13:56.809 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.c4u 00:13:56.809 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:56.809 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:56.809 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:56.809 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.c4u 00:13:56.809 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.c4u 00:13:57.066 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:57.066 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.eox 00:13:57.066 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.067 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.067 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.067 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.eox 00:13:57.067 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.eox 00:13:57.324 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha384.ebh ]] 00:13:57.324 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ebh 00:13:57.324 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.324 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.325 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.325 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ebh 00:13:57.325 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.ebh 00:13:57.582 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:57.582 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.0LS 00:13:57.582 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.582 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.582 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.582 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.0LS 00:13:57.582 12:14:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.0LS 00:13:57.840 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
/tmp/spdk.key-sha256.rYp ]] 00:13:57.840 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rYp 00:13:57.840 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:57.840 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:57.840 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:57.840 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rYp 00:13:57.840 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.rYp 00:13:58.099 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:13:58.099 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.7Td 00:13:58.099 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.099 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:58.099 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.099 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.7Td 00:13:58.099 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.7Td 00:13:58.357 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n 
'' ]] 00:13:58.357 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:13:58.357 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:13:58.357 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:13:58.357 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:58.357 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:13:58.615 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:13:58.615 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:13:58.615 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:13:58.615 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:13:58.615 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:13:58.615 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:13:58.615 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.615 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:58.615 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
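After each authenticated attach, the test probes the qpair record with three jq filters (`auth.digest`, `auth.dhgroup`, `auth.state`). Those probes can be replayed standalone against a captured record; the JSON below is trimmed from the `nvmf_subsystem_get_qpairs` output in this trace, keeping only the fields the assertions touch:

```shell
# Trimmed qpair record from the trace; the auth block is what the test asserts on.
qpairs='[{"cntlid": 1, "qid": 0, "state": "enabled",
          "auth": {"state": "completed", "digest": "sha256", "dhgroup": "null"}}]'
[[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256    ]]
[[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == null      ]]
[[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]
echo 'auth checks passed'
```

An `auth.state` other than `completed` (or a missing `auth` block) is what distinguishes a plain TCP connection from one that finished DH-HMAC-CHAP negotiation.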
00:13:58.615 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:58.615 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.615 12:14:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:13:58.873 00:13:58.873 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:13:58.873 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:13:58.873 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:13:59.131 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:13:59.131 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:13:59.131 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:13:59.131 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:13:59.131 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:13:59.131 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:13:59.131 { 00:13:59.131 "cntlid": 1, 00:13:59.131 "qid": 0, 00:13:59.131 "state": "enabled", 00:13:59.131 "thread": "nvmf_tgt_poll_group_000", 00:13:59.131 "listen_address": { 00:13:59.131 "trtype": "TCP", 00:13:59.131 "adrfam": "IPv4", 00:13:59.131 "traddr": "10.0.0.2", 00:13:59.131 "trsvcid": "4420" 00:13:59.131 }, 00:13:59.131 "peer_address": { 00:13:59.131 "trtype": "TCP", 00:13:59.131 "adrfam": "IPv4", 00:13:59.131 "traddr": "10.0.0.1", 00:13:59.131 "trsvcid": "41602" 00:13:59.131 }, 00:13:59.131 "auth": { 00:13:59.131 "state": "completed", 00:13:59.131 "digest": "sha256", 00:13:59.131 "dhgroup": "null" 00:13:59.131 } 00:13:59.131 } 00:13:59.131 ]' 00:13:59.131 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:13:59.399 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:13:59.399 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:13:59.399 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:13:59.399 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:13:59.399 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:13:59.399 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:13:59.399 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:13:59.663 12:14:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:14:00.623 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:00.623 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:00.623 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:00.623 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.623 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.623 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.623 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:00.623 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:00.623 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:00.913 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:14:00.913 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:00.913 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:00.913 12:14:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:00.913 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:00.913 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:00.913 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.913 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:00.913 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:00.913 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:00.913 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:00.913 12:14:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:01.170 00:14:01.170 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:01.170 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:01.170 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:01.428 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:01.428 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:01.428 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:01.428 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:01.428 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:01.428 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:01.428 { 00:14:01.428 "cntlid": 3, 00:14:01.428 "qid": 0, 00:14:01.428 "state": "enabled", 00:14:01.428 "thread": "nvmf_tgt_poll_group_000", 00:14:01.428 "listen_address": { 00:14:01.428 "trtype": "TCP", 00:14:01.428 "adrfam": "IPv4", 00:14:01.428 "traddr": "10.0.0.2", 00:14:01.428 "trsvcid": "4420" 00:14:01.428 }, 00:14:01.428 "peer_address": { 00:14:01.428 "trtype": "TCP", 00:14:01.428 "adrfam": "IPv4", 00:14:01.428 "traddr": "10.0.0.1", 00:14:01.428 "trsvcid": "41636" 00:14:01.428 }, 00:14:01.428 "auth": { 00:14:01.428 "state": "completed", 00:14:01.428 "digest": "sha256", 00:14:01.428 "dhgroup": "null" 00:14:01.428 } 00:14:01.428 } 00:14:01.428 ]' 00:14:01.428 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:01.428 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:01.428 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:01.428 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:01.428 12:14:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:01.428 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:01.428 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:01.428 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:01.686 12:14:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:14:02.618 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:02.618 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:02.618 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:02.618 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.618 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.618 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.618 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:02.618 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:02.618 12:14:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:02.877 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:14:02.877 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:02.877 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:02.877 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:02.877 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:02.877 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:02.877 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.877 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:02.877 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:02.877 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:02.877 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:02.877 
12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:03.443 00:14:03.443 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:03.443 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:03.443 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:03.443 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:03.443 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:03.443 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:03.443 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:03.443 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:03.443 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:03.443 { 00:14:03.443 "cntlid": 5, 00:14:03.443 "qid": 0, 00:14:03.443 "state": "enabled", 00:14:03.443 "thread": "nvmf_tgt_poll_group_000", 00:14:03.444 "listen_address": { 00:14:03.444 "trtype": "TCP", 00:14:03.444 "adrfam": "IPv4", 00:14:03.444 "traddr": "10.0.0.2", 00:14:03.444 "trsvcid": "4420" 00:14:03.444 }, 00:14:03.444 "peer_address": { 00:14:03.444 "trtype": "TCP", 00:14:03.444 "adrfam": "IPv4", 00:14:03.444 "traddr": 
"10.0.0.1", 00:14:03.444 "trsvcid": "41666" 00:14:03.444 }, 00:14:03.444 "auth": { 00:14:03.444 "state": "completed", 00:14:03.444 "digest": "sha256", 00:14:03.444 "dhgroup": "null" 00:14:03.444 } 00:14:03.444 } 00:14:03.444 ]' 00:14:03.444 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:03.702 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:03.702 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:03.702 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:03.702 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:03.702 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:03.702 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:03.702 12:14:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:03.961 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:14:04.908 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:04.908 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:04.908 12:14:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:04.908 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:04.908 12:14:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:04.908 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:04.908 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:04.908 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:04.908 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:14:05.166 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:14:05.166 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:05.166 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:05.166 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:05.166 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:05.166 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:05.166 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:05.166 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.166 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.166 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.166 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:05.166 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:05.424 00:14:05.424 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:05.424 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:05.424 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:05.682 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:05.682 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:05.682 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:05.682 12:14:58 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:05.682 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:05.682 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:05.682 { 00:14:05.682 "cntlid": 7, 00:14:05.682 "qid": 0, 00:14:05.682 "state": "enabled", 00:14:05.682 "thread": "nvmf_tgt_poll_group_000", 00:14:05.682 "listen_address": { 00:14:05.682 "trtype": "TCP", 00:14:05.682 "adrfam": "IPv4", 00:14:05.682 "traddr": "10.0.0.2", 00:14:05.682 "trsvcid": "4420" 00:14:05.682 }, 00:14:05.682 "peer_address": { 00:14:05.682 "trtype": "TCP", 00:14:05.682 "adrfam": "IPv4", 00:14:05.682 "traddr": "10.0.0.1", 00:14:05.682 "trsvcid": "41706" 00:14:05.682 }, 00:14:05.682 "auth": { 00:14:05.682 "state": "completed", 00:14:05.682 "digest": "sha256", 00:14:05.682 "dhgroup": "null" 00:14:05.682 } 00:14:05.682 } 00:14:05.682 ]' 00:14:05.682 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:05.682 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:05.682 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:05.682 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:05.682 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:05.940 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:05.940 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:05.940 12:14:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:06.198 12:14:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 00:14:07.132 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:07.132 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:07.132 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:07.132 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.132 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.132 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.132 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:07.132 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:07.132 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:07.132 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:07.390 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe2048 0 00:14:07.390 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:07.390 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:07.390 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:07.390 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:07.390 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:07.390 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.390 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.390 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.390 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.390 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.390 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:07.648 00:14:07.648 12:15:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:07.648 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:07.648 12:15:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:07.907 { 00:14:07.907 "cntlid": 9, 00:14:07.907 "qid": 0, 00:14:07.907 "state": "enabled", 00:14:07.907 "thread": "nvmf_tgt_poll_group_000", 00:14:07.907 "listen_address": { 00:14:07.907 "trtype": "TCP", 00:14:07.907 "adrfam": "IPv4", 00:14:07.907 "traddr": "10.0.0.2", 00:14:07.907 "trsvcid": "4420" 00:14:07.907 }, 00:14:07.907 "peer_address": { 00:14:07.907 "trtype": "TCP", 00:14:07.907 "adrfam": "IPv4", 00:14:07.907 "traddr": "10.0.0.1", 00:14:07.907 "trsvcid": "55636" 00:14:07.907 }, 00:14:07.907 "auth": { 00:14:07.907 "state": "completed", 00:14:07.907 "digest": "sha256", 00:14:07.907 "dhgroup": "ffdhe2048" 00:14:07.907 } 00:14:07.907 } 00:14:07.907 ]' 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:07.907 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:08.165 12:15:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:09.537 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.537 12:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:09.537 12:15:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.537 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:09.795 00:14:09.795 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:09.795 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:09.795 12:15:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:10.052 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:10.052 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:10.052 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:10.052 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:10.052 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:10.052 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:10.052 { 
00:14:10.052 "cntlid": 11, 00:14:10.052 "qid": 0, 00:14:10.052 "state": "enabled", 00:14:10.052 "thread": "nvmf_tgt_poll_group_000", 00:14:10.052 "listen_address": { 00:14:10.052 "trtype": "TCP", 00:14:10.052 "adrfam": "IPv4", 00:14:10.052 "traddr": "10.0.0.2", 00:14:10.052 "trsvcid": "4420" 00:14:10.052 }, 00:14:10.052 "peer_address": { 00:14:10.052 "trtype": "TCP", 00:14:10.052 "adrfam": "IPv4", 00:14:10.052 "traddr": "10.0.0.1", 00:14:10.052 "trsvcid": "55672" 00:14:10.052 }, 00:14:10.052 "auth": { 00:14:10.052 "state": "completed", 00:14:10.052 "digest": "sha256", 00:14:10.052 "dhgroup": "ffdhe2048" 00:14:10.052 } 00:14:10.052 } 00:14:10.052 ]' 00:14:10.052 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:10.052 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:10.052 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:10.309 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:10.309 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:10.309 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:10.309 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:10.310 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:10.567 12:15:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:14:11.498 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:11.498 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:11.498 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:11.498 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.498 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.498 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.498 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:11.498 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:11.498 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:11.757 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:14:11.757 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:11.757 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:11.757 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:14:11.757 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:11.757 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:11.757 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.757 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:11.757 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:11.757 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:11.757 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:11.757 12:15:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:12.014 00:14:12.014 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:12.014 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:12.014 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:12.272 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:12.272 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:12.272 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:12.272 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:12.272 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:12.272 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:12.272 { 00:14:12.272 "cntlid": 13, 00:14:12.272 "qid": 0, 00:14:12.272 "state": "enabled", 00:14:12.272 "thread": "nvmf_tgt_poll_group_000", 00:14:12.272 "listen_address": { 00:14:12.272 "trtype": "TCP", 00:14:12.272 "adrfam": "IPv4", 00:14:12.272 "traddr": "10.0.0.2", 00:14:12.272 "trsvcid": "4420" 00:14:12.272 }, 00:14:12.272 "peer_address": { 00:14:12.272 "trtype": "TCP", 00:14:12.272 "adrfam": "IPv4", 00:14:12.272 "traddr": "10.0.0.1", 00:14:12.272 "trsvcid": "55696" 00:14:12.272 }, 00:14:12.272 "auth": { 00:14:12.272 "state": "completed", 00:14:12.272 "digest": "sha256", 00:14:12.272 "dhgroup": "ffdhe2048" 00:14:12.272 } 00:14:12.272 } 00:14:12.272 ]' 00:14:12.272 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:12.272 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:12.272 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:12.529 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:14:12.529 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:12.529 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:12.529 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:12.529 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:12.787 12:15:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:14:13.719 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:13.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:13.719 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:13.719 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.719 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.719 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.719 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:13.719 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:13.719 12:15:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:14:13.977 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:14:13.977 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:13.977 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:13.977 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:14:13.977 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:13.977 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:13.977 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:13.977 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:13.977 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:13.977 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:13.977 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:13.977 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:14.234 00:14:14.234 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:14.234 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:14.234 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:14.492 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:14.492 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:14.492 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:14.492 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:14.492 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:14.492 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:14.492 { 00:14:14.492 "cntlid": 15, 00:14:14.492 "qid": 0, 00:14:14.492 "state": "enabled", 00:14:14.492 "thread": "nvmf_tgt_poll_group_000", 00:14:14.492 "listen_address": { 00:14:14.492 "trtype": "TCP", 00:14:14.492 "adrfam": "IPv4", 00:14:14.492 "traddr": "10.0.0.2", 00:14:14.492 "trsvcid": "4420" 00:14:14.492 }, 00:14:14.492 "peer_address": { 00:14:14.492 "trtype": "TCP", 00:14:14.492 "adrfam": "IPv4", 00:14:14.492 "traddr": "10.0.0.1", 00:14:14.492 "trsvcid": "55710" 00:14:14.492 }, 00:14:14.492 "auth": { 
00:14:14.492 "state": "completed",
00:14:14.492 "digest": "sha256",
00:14:14.492 "dhgroup": "ffdhe2048"
00:14:14.492 }
00:14:14.492 }
00:14:14.492 ]'
00:14:14.492 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:14.492 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:14.492 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:14.749 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]]
00:14:14.749 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:14.749 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:14.749 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:14.749 12:15:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:15.007 12:15:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=:
00:14:15.968 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:15.968 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:15.968 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:15.968 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:15.968 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:15.968 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:15.968 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:14:15.968 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:15.969 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:15.969 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:16.226 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0
00:14:16.226 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:16.226 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:16.226 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:14:16.226 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:14:16.226 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:16.226 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:16.226 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.226 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:16.226 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.226 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:16.226 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:16.484
00:14:16.484 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:16.484 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:16.484 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:16.741 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:16.741 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:16.741 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:16.741 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:16.741 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:16.741 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:16.741 {
00:14:16.741 "cntlid": 17,
00:14:16.741 "qid": 0,
00:14:16.741 "state": "enabled",
00:14:16.741 "thread": "nvmf_tgt_poll_group_000",
00:14:16.741 "listen_address": {
00:14:16.741 "trtype": "TCP",
00:14:16.741 "adrfam": "IPv4",
00:14:16.741 "traddr": "10.0.0.2",
00:14:16.741 "trsvcid": "4420"
00:14:16.741 },
00:14:16.741 "peer_address": {
00:14:16.741 "trtype": "TCP",
00:14:16.741 "adrfam": "IPv4",
00:14:16.742 "traddr": "10.0.0.1",
00:14:16.742 "trsvcid": "37782"
00:14:16.742 },
00:14:16.742 "auth": {
00:14:16.742 "state": "completed",
00:14:16.742 "digest": "sha256",
00:14:16.742 "dhgroup": "ffdhe3072"
00:14:16.742 }
00:14:16.742 }
00:14:16.742 ]'
00:14:16.742 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:16.742 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:16.742 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:16.742 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:16.742 12:15:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:17.000 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:17.000 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:17.000 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:17.258 12:15:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=:
00:14:18.191 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:18.191 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:18.191 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:18.191 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.191 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.191 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.191 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:18.191 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:18.191 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:18.449 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1
00:14:18.449 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:18.449 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:18.449 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:14:18.449 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:14:18.449 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:18.449 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:18.449 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.449 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.449 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.449 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:18.449 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:18.707
00:14:18.707 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:18.707 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:18.707 12:15:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:18.965 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:18.965 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:18.965 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:18.965 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:18.965 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:18.965 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:18.965 {
00:14:18.965 "cntlid": 19,
00:14:18.965 "qid": 0,
00:14:18.965 "state": "enabled",
00:14:18.965 "thread": "nvmf_tgt_poll_group_000",
00:14:18.965 "listen_address": {
00:14:18.965 "trtype": "TCP",
00:14:18.965 "adrfam": "IPv4",
00:14:18.965 "traddr": "10.0.0.2",
00:14:18.965 "trsvcid": "4420"
00:14:18.965 },
00:14:18.965 "peer_address": {
00:14:18.965 "trtype": "TCP",
00:14:18.965 "adrfam": "IPv4",
00:14:18.965 "traddr": "10.0.0.1",
00:14:18.965 "trsvcid": "37814"
00:14:18.965 },
00:14:18.965 "auth": {
00:14:18.965 "state": "completed",
00:14:18.965 "digest": "sha256",
00:14:18.965 "dhgroup": "ffdhe3072"
00:14:18.965 }
00:14:18.965 }
00:14:18.965 ]'
00:14:18.965 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:18.965 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:18.965 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:19.223 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:19.223 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:19.223 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:19.223 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:19.223 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:19.481 12:15:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==:
00:14:20.414 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:20.414 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:20.414 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:20.414 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:20.414 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.414 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:20.414 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:20.414 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:20.414 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:20.672 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2
00:14:20.672 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:20.672 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:20.672 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:14:20.672 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:14:20.672 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:20.672 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:20.672 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:20.672 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:20.672 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:20.672 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:20.672 12:15:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:14:20.930
00:14:20.930 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:20.930 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:20.930 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:21.188 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:21.188 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:21.188 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:21.188 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:21.188 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:21.188 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:21.188 {
00:14:21.188 "cntlid": 21,
00:14:21.188 "qid": 0,
00:14:21.188 "state": "enabled",
00:14:21.188 "thread": "nvmf_tgt_poll_group_000",
00:14:21.189 "listen_address": {
00:14:21.189 "trtype": "TCP",
00:14:21.189 "adrfam": "IPv4",
00:14:21.189 "traddr": "10.0.0.2",
00:14:21.189 "trsvcid": "4420"
00:14:21.189 },
00:14:21.189 "peer_address": {
00:14:21.189 "trtype": "TCP",
00:14:21.189 "adrfam": "IPv4",
00:14:21.189 "traddr": "10.0.0.1",
00:14:21.189 "trsvcid": "37846"
00:14:21.189 },
00:14:21.189 "auth": {
00:14:21.189 "state": "completed",
00:14:21.189 "digest": "sha256",
00:14:21.189 "dhgroup": "ffdhe3072"
00:14:21.189 }
00:14:21.189 }
00:14:21.189 ]'
00:14:21.189 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:21.189 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:21.189 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:21.189 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:21.189 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:21.447 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:21.447 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:21.447 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:21.738 12:15:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/:
00:14:22.672 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:22.672 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:22.672 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:22.672 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:22.672 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:22.672 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:22.672 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:22.672 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:22.672 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:14:22.930 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3
00:14:22.930 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:22.930 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:22.930 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:14:22.930 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:14:22.930 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:22.930 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:14:22.930 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:22.930 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:22.930 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:22.930 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:22.930 12:15:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:14:23.189
00:14:23.189 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:23.189 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:23.189 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:23.447 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:23.448 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:23.448 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:23.448 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:23.448 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:23.448 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:23.448 {
00:14:23.448 "cntlid": 23,
00:14:23.448 "qid": 0,
00:14:23.448 "state": "enabled",
00:14:23.448 "thread": "nvmf_tgt_poll_group_000",
00:14:23.448 "listen_address": {
00:14:23.448 "trtype": "TCP",
00:14:23.448 "adrfam": "IPv4",
00:14:23.448 "traddr": "10.0.0.2",
00:14:23.448 "trsvcid": "4420"
00:14:23.448 },
00:14:23.448 "peer_address": {
00:14:23.448 "trtype": "TCP",
00:14:23.448 "adrfam": "IPv4",
00:14:23.448 "traddr": "10.0.0.1",
00:14:23.448 "trsvcid": "37866"
00:14:23.448 },
00:14:23.448 "auth": {
00:14:23.448 "state": "completed",
00:14:23.448 "digest": "sha256",
00:14:23.448 "dhgroup": "ffdhe3072"
00:14:23.448 }
00:14:23.448 }
00:14:23.448 ]'
00:14:23.448 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:23.448 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:23.448 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:23.448 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:14:23.448 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:23.448 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:23.448 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:23.448 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:23.706 12:15:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=:
00:14:24.639 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:24.897 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:24.897 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:24.897 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:24.897 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:24.897 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:24.897 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:14:24.897 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:24.897 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:24.897 12:15:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:25.156 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0
00:14:25.156 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:25.156 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:25.156 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:25.156 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:14:25.156 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:25.156 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:25.156 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.156 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:25.156 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.156 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:25.156 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:14:25.414
00:14:25.414 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:25.414 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:25.414 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:14:25.672 {
00:14:25.672 "cntlid": 25,
00:14:25.672 "qid": 0,
00:14:25.672 "state": "enabled",
00:14:25.672 "thread": "nvmf_tgt_poll_group_000",
00:14:25.672 "listen_address": {
00:14:25.672 "trtype": "TCP",
00:14:25.672 "adrfam": "IPv4",
00:14:25.672 "traddr": "10.0.0.2",
00:14:25.672 "trsvcid": "4420"
00:14:25.672 },
00:14:25.672 "peer_address": {
00:14:25.672 "trtype": "TCP",
00:14:25.672 "adrfam": "IPv4",
00:14:25.672 "traddr": "10.0.0.1",
00:14:25.672 "trsvcid": "37892"
00:14:25.672 },
00:14:25.672 "auth": {
00:14:25.672 "state": "completed",
00:14:25.672 "digest": "sha256",
00:14:25.672 "dhgroup": "ffdhe4096"
00:14:25.672 }
00:14:25.672 }
00:14:25.672 ]'
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]]
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:14:25.672 12:15:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:14:25.931 12:15:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=:
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:14:27.303 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:27.303 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:14:27.561
00:14:27.561 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:14:27.561 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:14:27.561 12:15:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:14:27.819 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:14:27.819 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:14:27.819 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- #
xtrace_disable 00:14:27.819 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:27.819 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:27.819 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:27.819 { 00:14:27.819 "cntlid": 27, 00:14:27.819 "qid": 0, 00:14:27.819 "state": "enabled", 00:14:27.819 "thread": "nvmf_tgt_poll_group_000", 00:14:27.819 "listen_address": { 00:14:27.820 "trtype": "TCP", 00:14:27.820 "adrfam": "IPv4", 00:14:27.820 "traddr": "10.0.0.2", 00:14:27.820 "trsvcid": "4420" 00:14:27.820 }, 00:14:27.820 "peer_address": { 00:14:27.820 "trtype": "TCP", 00:14:27.820 "adrfam": "IPv4", 00:14:27.820 "traddr": "10.0.0.1", 00:14:27.820 "trsvcid": "54004" 00:14:27.820 }, 00:14:27.820 "auth": { 00:14:27.820 "state": "completed", 00:14:27.820 "digest": "sha256", 00:14:27.820 "dhgroup": "ffdhe4096" 00:14:27.820 } 00:14:27.820 } 00:14:27.820 ]' 00:14:27.820 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:28.078 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:28.078 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:28.078 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:28.078 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:28.078 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:28.078 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:28.078 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:28.336 12:15:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:14:29.268 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:29.268 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:29.268 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:29.268 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.268 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.268 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.268 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:29.268 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:29.268 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:29.526 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha256 ffdhe4096 2 00:14:29.526 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:29.526 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:29.526 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:29.526 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:29.526 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:29.526 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.526 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:29.526 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:29.526 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:29.526 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.526 12:15:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:29.783 00:14:30.041 12:15:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:30.041 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:30.041 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:30.298 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:30.298 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:30.298 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:30.298 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:30.298 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:30.298 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:30.298 { 00:14:30.298 "cntlid": 29, 00:14:30.298 "qid": 0, 00:14:30.298 "state": "enabled", 00:14:30.299 "thread": "nvmf_tgt_poll_group_000", 00:14:30.299 "listen_address": { 00:14:30.299 "trtype": "TCP", 00:14:30.299 "adrfam": "IPv4", 00:14:30.299 "traddr": "10.0.0.2", 00:14:30.299 "trsvcid": "4420" 00:14:30.299 }, 00:14:30.299 "peer_address": { 00:14:30.299 "trtype": "TCP", 00:14:30.299 "adrfam": "IPv4", 00:14:30.299 "traddr": "10.0.0.1", 00:14:30.299 "trsvcid": "54026" 00:14:30.299 }, 00:14:30.299 "auth": { 00:14:30.299 "state": "completed", 00:14:30.299 "digest": "sha256", 00:14:30.299 "dhgroup": "ffdhe4096" 00:14:30.299 } 00:14:30.299 } 00:14:30.299 ]' 00:14:30.299 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:30.299 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:30.299 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:30.299 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:30.299 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:30.299 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:30.299 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:30.299 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:30.555 12:15:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:14:31.530 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:31.530 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:31.530 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:31.530 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.530 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:14:31.530 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:31.530 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:31.530 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:31.530 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:14:31.788 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:14:31.788 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:31.788 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:31.788 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:14:31.788 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:31.788 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:31.788 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:31.788 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:31.788 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:31.788 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:14:31.788 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:31.788 12:15:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:32.357 00:14:32.357 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:32.358 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:32.358 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:32.358 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:32.358 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:32.358 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:32.358 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:32.358 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:32.358 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:32.358 { 00:14:32.358 "cntlid": 31, 00:14:32.358 "qid": 0, 00:14:32.358 "state": "enabled", 00:14:32.358 "thread": "nvmf_tgt_poll_group_000", 
00:14:32.358 "listen_address": { 00:14:32.358 "trtype": "TCP", 00:14:32.358 "adrfam": "IPv4", 00:14:32.358 "traddr": "10.0.0.2", 00:14:32.358 "trsvcid": "4420" 00:14:32.358 }, 00:14:32.358 "peer_address": { 00:14:32.358 "trtype": "TCP", 00:14:32.358 "adrfam": "IPv4", 00:14:32.358 "traddr": "10.0.0.1", 00:14:32.358 "trsvcid": "54046" 00:14:32.358 }, 00:14:32.358 "auth": { 00:14:32.358 "state": "completed", 00:14:32.358 "digest": "sha256", 00:14:32.358 "dhgroup": "ffdhe4096" 00:14:32.358 } 00:14:32.358 } 00:14:32.358 ]' 00:14:32.358 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:32.616 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:32.616 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:32.616 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:14:32.616 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:32.616 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:32.616 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:32.616 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:32.874 12:15:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 
00:14:33.813 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:33.813 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:33.813 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:33.813 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:33.813 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:33.813 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:33.813 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:33.813 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:33.813 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:33.813 12:15:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:34.071 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:14:34.071 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:34.071 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:34.071 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:34.071 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:14:34.071 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:34.071 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.071 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.072 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.072 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.072 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.072 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:34.640 00:14:34.640 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:34.640 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:34.640 12:15:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:34.898 12:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:34.898 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:34.898 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:34.898 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:34.898 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:34.898 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:34.898 { 00:14:34.898 "cntlid": 33, 00:14:34.898 "qid": 0, 00:14:34.898 "state": "enabled", 00:14:34.898 "thread": "nvmf_tgt_poll_group_000", 00:14:34.898 "listen_address": { 00:14:34.898 "trtype": "TCP", 00:14:34.898 "adrfam": "IPv4", 00:14:34.898 "traddr": "10.0.0.2", 00:14:34.898 "trsvcid": "4420" 00:14:34.898 }, 00:14:34.898 "peer_address": { 00:14:34.898 "trtype": "TCP", 00:14:34.898 "adrfam": "IPv4", 00:14:34.898 "traddr": "10.0.0.1", 00:14:34.898 "trsvcid": "54072" 00:14:34.898 }, 00:14:34.898 "auth": { 00:14:34.898 "state": "completed", 00:14:34.898 "digest": "sha256", 00:14:34.898 "dhgroup": "ffdhe6144" 00:14:34.898 } 00:14:34.898 } 00:14:34.898 ]' 00:14:34.898 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:35.156 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:35.156 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:35.156 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:35.156 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:35.156 12:15:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:35.156 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:35.156 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:35.415 12:15:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:14:36.353 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:36.353 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:36.353 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:36.353 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.353 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.353 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.353 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:36.353 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups ffdhe6144 00:14:36.353 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:36.611 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:14:36.611 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:36.611 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:36.611 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:36.611 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:36.611 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:36.611 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.611 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:36.611 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:36.611 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:36.612 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:36.612 12:15:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:37.178 00:14:37.178 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:37.178 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:37.178 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:37.437 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:37.437 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:37.437 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:37.437 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:37.437 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:37.437 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:37.437 { 00:14:37.437 "cntlid": 35, 00:14:37.437 "qid": 0, 00:14:37.437 "state": "enabled", 00:14:37.437 "thread": "nvmf_tgt_poll_group_000", 00:14:37.437 "listen_address": { 00:14:37.437 "trtype": "TCP", 00:14:37.437 "adrfam": "IPv4", 00:14:37.437 "traddr": "10.0.0.2", 00:14:37.437 "trsvcid": "4420" 00:14:37.437 }, 00:14:37.437 "peer_address": { 00:14:37.437 "trtype": "TCP", 00:14:37.437 "adrfam": "IPv4", 00:14:37.437 "traddr": "10.0.0.1", 00:14:37.437 "trsvcid": "57364" 00:14:37.437 
}, 00:14:37.437 "auth": { 00:14:37.437 "state": "completed", 00:14:37.437 "digest": "sha256", 00:14:37.437 "dhgroup": "ffdhe6144" 00:14:37.437 } 00:14:37.437 } 00:14:37.437 ]' 00:14:37.437 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:37.437 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:37.437 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:37.437 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:37.437 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:37.697 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:37.697 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:37.697 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:37.956 12:15:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:14:38.893 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:38.893 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:38.893 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:38.893 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:38.893 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:38.893 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:38.893 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:38.893 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:38.893 12:15:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:39.150 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:14:39.150 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:39.150 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:39.150 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:39.150 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:39.150 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:39.150 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:14:39.150 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.150 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:39.150 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.150 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.150 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:39.722 00:14:39.722 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:39.722 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:39.722 12:15:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:39.979 { 00:14:39.979 "cntlid": 37, 00:14:39.979 "qid": 0, 00:14:39.979 "state": "enabled", 00:14:39.979 "thread": "nvmf_tgt_poll_group_000", 00:14:39.979 "listen_address": { 00:14:39.979 "trtype": "TCP", 00:14:39.979 "adrfam": "IPv4", 00:14:39.979 "traddr": "10.0.0.2", 00:14:39.979 "trsvcid": "4420" 00:14:39.979 }, 00:14:39.979 "peer_address": { 00:14:39.979 "trtype": "TCP", 00:14:39.979 "adrfam": "IPv4", 00:14:39.979 "traddr": "10.0.0.1", 00:14:39.979 "trsvcid": "57400" 00:14:39.979 }, 00:14:39.979 "auth": { 00:14:39.979 "state": "completed", 00:14:39.979 "digest": "sha256", 00:14:39.979 "dhgroup": "ffdhe6144" 00:14:39.979 } 00:14:39.979 } 00:14:39.979 ]' 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:39.979 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:14:40.237 12:15:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:14:41.172 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:41.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:41.172 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:41.172 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.172 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.172 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.172 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:41.172 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:41.172 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:14:41.431 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:14:41.431 12:15:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:41.431 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:41.431 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:14:41.431 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:41.431 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:41.431 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:41.431 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:41.431 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:41.690 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:41.690 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:41.690 12:15:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:42.256 00:14:42.256 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:42.256 12:15:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:42.256 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:42.257 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:42.257 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:42.257 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:42.257 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:42.257 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:42.257 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:42.257 { 00:14:42.257 "cntlid": 39, 00:14:42.257 "qid": 0, 00:14:42.257 "state": "enabled", 00:14:42.257 "thread": "nvmf_tgt_poll_group_000", 00:14:42.257 "listen_address": { 00:14:42.257 "trtype": "TCP", 00:14:42.257 "adrfam": "IPv4", 00:14:42.257 "traddr": "10.0.0.2", 00:14:42.257 "trsvcid": "4420" 00:14:42.257 }, 00:14:42.257 "peer_address": { 00:14:42.257 "trtype": "TCP", 00:14:42.257 "adrfam": "IPv4", 00:14:42.257 "traddr": "10.0.0.1", 00:14:42.257 "trsvcid": "57442" 00:14:42.257 }, 00:14:42.257 "auth": { 00:14:42.257 "state": "completed", 00:14:42.257 "digest": "sha256", 00:14:42.257 "dhgroup": "ffdhe6144" 00:14:42.257 } 00:14:42.257 } 00:14:42.257 ]' 00:14:42.257 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:42.514 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:42.514 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:42.514 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:14:42.514 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:42.514 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:42.514 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:42.514 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:42.771 12:15:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 00:14:43.707 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:43.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:43.707 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:43.707 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.707 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.707 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.707 12:15:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:43.707 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:43.707 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.707 12:15:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:43.965 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:14:43.965 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:43.965 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:43.965 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:43.965 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:43.965 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:43.965 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.965 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:43.965 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:43.965 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:43.965 12:15:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:43.965 12:15:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:44.902 00:14:44.903 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:44.903 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:44.903 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:45.160 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:45.160 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:45.160 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:45.160 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:45.160 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:45.160 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:45.160 { 00:14:45.160 "cntlid": 41, 00:14:45.160 "qid": 0, 00:14:45.160 "state": "enabled", 00:14:45.160 "thread": 
"nvmf_tgt_poll_group_000", 00:14:45.160 "listen_address": { 00:14:45.160 "trtype": "TCP", 00:14:45.160 "adrfam": "IPv4", 00:14:45.160 "traddr": "10.0.0.2", 00:14:45.160 "trsvcid": "4420" 00:14:45.160 }, 00:14:45.160 "peer_address": { 00:14:45.160 "trtype": "TCP", 00:14:45.160 "adrfam": "IPv4", 00:14:45.160 "traddr": "10.0.0.1", 00:14:45.160 "trsvcid": "57478" 00:14:45.160 }, 00:14:45.160 "auth": { 00:14:45.160 "state": "completed", 00:14:45.160 "digest": "sha256", 00:14:45.160 "dhgroup": "ffdhe8192" 00:14:45.160 } 00:14:45.160 } 00:14:45.160 ]' 00:14:45.160 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:45.160 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:45.160 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:45.160 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:45.160 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:45.418 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:45.418 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:45.418 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:45.676 12:15:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:14:46.646 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:46.646 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:46.646 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:46.646 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.646 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.646 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.646 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:46.646 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:46.646 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:46.904 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:14:46.904 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:46.904 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:46.904 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe8192 00:14:46.904 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:46.904 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:46.904 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.904 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:46.904 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:46.904 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:46.904 12:15:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:46.904 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:47.842 00:14:47.842 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:47.842 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:47.842 12:15:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:48.101 { 00:14:48.101 "cntlid": 43, 00:14:48.101 "qid": 0, 00:14:48.101 "state": "enabled", 00:14:48.101 "thread": "nvmf_tgt_poll_group_000", 00:14:48.101 "listen_address": { 00:14:48.101 "trtype": "TCP", 00:14:48.101 "adrfam": "IPv4", 00:14:48.101 "traddr": "10.0.0.2", 00:14:48.101 "trsvcid": "4420" 00:14:48.101 }, 00:14:48.101 "peer_address": { 00:14:48.101 "trtype": "TCP", 00:14:48.101 "adrfam": "IPv4", 00:14:48.101 "traddr": "10.0.0.1", 00:14:48.101 "trsvcid": "57702" 00:14:48.101 }, 00:14:48.101 "auth": { 00:14:48.101 "state": "completed", 00:14:48.101 "digest": "sha256", 00:14:48.101 "dhgroup": "ffdhe8192" 00:14:48.101 } 00:14:48.101 } 00:14:48.101 ]' 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:48.101 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:48.359 12:15:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:14:49.295 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:49.295 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:49.295 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:49.295 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.295 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.295 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.295 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:49.295 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:49.295 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:49.554 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:14:49.554 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:49.554 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:49.554 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:49.554 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:49.554 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:49.554 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.554 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:49.554 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:49.554 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:49.554 12:15:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:49.554 12:15:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:50.546 00:14:50.546 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:50.546 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:50.546 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:50.804 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:50.804 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:50.804 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:50.804 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:50.804 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:50.804 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:50.804 { 00:14:50.804 "cntlid": 45, 00:14:50.804 "qid": 0, 00:14:50.804 "state": "enabled", 00:14:50.804 "thread": "nvmf_tgt_poll_group_000", 00:14:50.804 "listen_address": { 00:14:50.804 "trtype": "TCP", 00:14:50.804 "adrfam": "IPv4", 00:14:50.804 "traddr": "10.0.0.2", 00:14:50.804 "trsvcid": "4420" 00:14:50.804 }, 00:14:50.804 "peer_address": { 00:14:50.804 "trtype": "TCP", 00:14:50.804 "adrfam": "IPv4", 00:14:50.804 "traddr": "10.0.0.1", 
00:14:50.804 "trsvcid": "57724" 00:14:50.804 }, 00:14:50.804 "auth": { 00:14:50.804 "state": "completed", 00:14:50.804 "digest": "sha256", 00:14:50.804 "dhgroup": "ffdhe8192" 00:14:50.804 } 00:14:50.804 } 00:14:50.804 ]' 00:14:50.804 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:50.804 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:50.804 12:15:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:50.804 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:50.804 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:50.804 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:50.804 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:50.804 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:51.063 12:15:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:52.442 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:52.442 12:15:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:52.442 12:15:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:14:53.379 00:14:53.379 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:53.379 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:53.379 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:53.637 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:53.637 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:53.637 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:53.637 12:15:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:53.637 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:53.637 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:53.637 { 00:14:53.637 "cntlid": 47, 00:14:53.637 "qid": 0, 00:14:53.637 "state": "enabled", 00:14:53.637 "thread": "nvmf_tgt_poll_group_000", 00:14:53.637 "listen_address": { 00:14:53.637 "trtype": "TCP", 00:14:53.637 "adrfam": "IPv4", 00:14:53.637 "traddr": "10.0.0.2", 00:14:53.637 "trsvcid": "4420" 00:14:53.637 }, 00:14:53.637 "peer_address": { 00:14:53.637 "trtype": "TCP", 00:14:53.637 "adrfam": "IPv4", 00:14:53.637 "traddr": "10.0.0.1", 00:14:53.637 "trsvcid": "57748" 00:14:53.637 }, 00:14:53.637 "auth": { 00:14:53.637 "state": "completed", 00:14:53.637 "digest": "sha256", 00:14:53.637 "dhgroup": "ffdhe8192" 00:14:53.637 } 00:14:53.637 } 00:14:53.637 ]' 00:14:53.637 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:53.637 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:14:53.637 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:53.637 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:14:53.637 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:53.637 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:53.637 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:53.637 12:15:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:53.895 12:15:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 00:14:54.833 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:54.833 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:54.833 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:54.833 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:54.833 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:54.833 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:54.833 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:14:54.833 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:14:54.833 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:54.833 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:54.833 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:55.399 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:14:55.399 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:55.399 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:55.399 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:55.399 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:14:55.399 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:55.399 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.399 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.399 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.399 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.399 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.399 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:14:55.659 00:14:55.659 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:55.659 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:55.659 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:55.917 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:55.917 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:55.917 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:55.917 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:55.917 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:55.917 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:55.917 { 00:14:55.917 "cntlid": 49, 00:14:55.917 "qid": 0, 00:14:55.917 "state": "enabled", 00:14:55.917 "thread": "nvmf_tgt_poll_group_000", 00:14:55.917 "listen_address": { 00:14:55.917 "trtype": "TCP", 00:14:55.917 "adrfam": "IPv4", 00:14:55.917 "traddr": "10.0.0.2", 00:14:55.917 "trsvcid": "4420" 00:14:55.917 }, 00:14:55.917 "peer_address": { 00:14:55.917 "trtype": "TCP", 00:14:55.917 "adrfam": "IPv4", 00:14:55.917 "traddr": "10.0.0.1", 00:14:55.917 "trsvcid": "57774" 00:14:55.917 }, 00:14:55.917 "auth": { 00:14:55.917 "state": "completed", 00:14:55.917 "digest": "sha384", 00:14:55.917 "dhgroup": "null" 00:14:55.917 } 00:14:55.917 } 00:14:55.917 ]' 00:14:55.917 
12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:55.917 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:55.918 12:15:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:55.918 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:55.918 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:55.918 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:55.918 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:55.918 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:56.176 12:15:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:14:57.114 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:57.114 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:57.114 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:57.114 
12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.114 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.114 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.114 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:57.114 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:57.114 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:57.372 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:14:57.372 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:57.372 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:57.372 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:57.372 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:14:57.372 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:57.372 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.372 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.372 12:15:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.372 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:57.372 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.372 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:14:57.630 00:14:57.630 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:57.630 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:57.630 12:15:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:14:57.888 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:14:57.888 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:14:57.888 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:57.888 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:57.888 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:14:57.888 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:14:57.888 { 00:14:57.888 "cntlid": 51, 00:14:57.888 "qid": 0, 00:14:57.888 "state": "enabled", 00:14:57.888 "thread": "nvmf_tgt_poll_group_000", 00:14:57.888 "listen_address": { 00:14:57.888 "trtype": "TCP", 00:14:57.888 "adrfam": "IPv4", 00:14:57.888 "traddr": "10.0.0.2", 00:14:57.888 "trsvcid": "4420" 00:14:57.888 }, 00:14:57.888 "peer_address": { 00:14:57.888 "trtype": "TCP", 00:14:57.888 "adrfam": "IPv4", 00:14:57.888 "traddr": "10.0.0.1", 00:14:57.888 "trsvcid": "39708" 00:14:57.888 }, 00:14:57.888 "auth": { 00:14:57.888 "state": "completed", 00:14:57.888 "digest": "sha384", 00:14:57.888 "dhgroup": "null" 00:14:57.888 } 00:14:57.888 } 00:14:57.888 ]' 00:14:57.888 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:14:58.146 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:14:58.146 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:14:58.146 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:14:58.146 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:14:58.146 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:14:58.146 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:14:58.146 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:14:58.405 12:15:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:14:59.342 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:14:59.342 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:14:59.342 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:14:59.342 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.342 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.342 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.342 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:14:59.342 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:59.343 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:14:59.601 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:14:59.601 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:14:59.601 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:14:59.601 12:15:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:14:59.601 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:14:59.601 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:14:59.601 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.601 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:14:59.601 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:14:59.601 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:14:59.601 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.601 12:15:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:14:59.859 00:14:59.860 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:14:59.860 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:14:59.860 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:00.134 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:00.134 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:00.134 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:00.134 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:00.134 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:00.134 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:00.134 { 00:15:00.134 "cntlid": 53, 00:15:00.134 "qid": 0, 00:15:00.134 "state": "enabled", 00:15:00.134 "thread": "nvmf_tgt_poll_group_000", 00:15:00.134 "listen_address": { 00:15:00.134 "trtype": "TCP", 00:15:00.134 "adrfam": "IPv4", 00:15:00.134 "traddr": "10.0.0.2", 00:15:00.134 "trsvcid": "4420" 00:15:00.134 }, 00:15:00.134 "peer_address": { 00:15:00.134 "trtype": "TCP", 00:15:00.134 "adrfam": "IPv4", 00:15:00.134 "traddr": "10.0.0.1", 00:15:00.134 "trsvcid": "39738" 00:15:00.134 }, 00:15:00.134 "auth": { 00:15:00.134 "state": "completed", 00:15:00.134 "digest": "sha384", 00:15:00.134 "dhgroup": "null" 00:15:00.134 } 00:15:00.134 } 00:15:00.134 ]' 00:15:00.134 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:00.399 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:00.399 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:00.399 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:00.399 12:15:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:00.399 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:00.399 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:00.399 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:00.656 12:15:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:15:01.591 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:01.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:01.591 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:01.591 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.591 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.591 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.591 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:01.591 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:01.591 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:15:01.851 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:15:01.851 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:01.851 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:01.851 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:01.851 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:01.851 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:01.851 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:01.851 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:01.851 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:01.851 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:01.851 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:01.851 12:15:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:02.112 00:15:02.112 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:02.112 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:02.112 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:02.371 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:02.371 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:02.371 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:02.371 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:02.630 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:02.630 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:02.630 { 00:15:02.630 "cntlid": 55, 00:15:02.630 "qid": 0, 00:15:02.630 "state": "enabled", 00:15:02.630 "thread": "nvmf_tgt_poll_group_000", 00:15:02.630 "listen_address": { 00:15:02.630 "trtype": "TCP", 00:15:02.630 "adrfam": "IPv4", 00:15:02.630 "traddr": "10.0.0.2", 00:15:02.630 "trsvcid": "4420" 00:15:02.630 }, 00:15:02.630 "peer_address": { 00:15:02.630 "trtype": "TCP", 00:15:02.630 "adrfam": "IPv4", 00:15:02.630 "traddr": "10.0.0.1", 00:15:02.630 "trsvcid": "39764" 00:15:02.630 }, 00:15:02.630 "auth": { 
00:15:02.630 "state": "completed", 00:15:02.630 "digest": "sha384", 00:15:02.630 "dhgroup": "null" 00:15:02.630 } 00:15:02.630 } 00:15:02.630 ]' 00:15:02.630 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:02.630 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:02.630 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:02.630 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:02.630 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:02.630 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:02.630 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:02.630 12:15:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:02.888 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 00:15:03.824 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:03.824 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:03.824 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:03.824 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:03.824 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:03.824 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:03.824 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:03.824 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:03.824 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:03.824 12:15:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:04.082 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:15:04.082 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:04.082 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:04.082 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:04.082 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:04.082 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:04.082 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.082 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:04.082 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.082 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.082 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.082 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:04.649 00:15:04.649 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:04.649 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:04.649 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:04.649 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:04.649 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:04.649 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:04.649 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:04.649 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:04.649 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:04.649 { 00:15:04.649 "cntlid": 57, 00:15:04.649 "qid": 0, 00:15:04.649 "state": "enabled", 00:15:04.649 "thread": "nvmf_tgt_poll_group_000", 00:15:04.649 "listen_address": { 00:15:04.649 "trtype": "TCP", 00:15:04.649 "adrfam": "IPv4", 00:15:04.649 "traddr": "10.0.0.2", 00:15:04.649 "trsvcid": "4420" 00:15:04.649 }, 00:15:04.649 "peer_address": { 00:15:04.649 "trtype": "TCP", 00:15:04.649 "adrfam": "IPv4", 00:15:04.649 "traddr": "10.0.0.1", 00:15:04.649 "trsvcid": "39792" 00:15:04.649 }, 00:15:04.649 "auth": { 00:15:04.649 "state": "completed", 00:15:04.649 "digest": "sha384", 00:15:04.649 "dhgroup": "ffdhe2048" 00:15:04.649 } 00:15:04.649 } 00:15:04.649 ]' 00:15:04.649 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:04.907 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:04.907 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:04.907 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:04.907 12:15:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:04.907 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:04.907 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:04.907 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:05.165 12:15:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:15:06.099 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:06.099 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:06.099 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:06.099 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.099 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.099 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.099 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:06.099 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:06.099 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:06.357 12:15:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:15:06.357 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:06.357 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:06.357 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:06.357 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:06.357 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:06.357 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.357 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.357 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:06.357 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:06.357 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:06.357 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:15:06.615 00:15:06.874 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:06.874 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:06.874 12:15:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:06.874 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:06.874 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:06.874 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:06.874 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:07.133 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:07.133 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:07.133 { 00:15:07.133 "cntlid": 59, 00:15:07.133 "qid": 0, 00:15:07.133 "state": "enabled", 00:15:07.133 "thread": "nvmf_tgt_poll_group_000", 00:15:07.133 "listen_address": { 00:15:07.133 "trtype": "TCP", 00:15:07.133 "adrfam": "IPv4", 00:15:07.133 "traddr": "10.0.0.2", 00:15:07.133 "trsvcid": "4420" 00:15:07.133 }, 00:15:07.133 "peer_address": { 00:15:07.133 "trtype": "TCP", 00:15:07.133 "adrfam": "IPv4", 00:15:07.133 "traddr": "10.0.0.1", 00:15:07.133 "trsvcid": "34320" 00:15:07.133 }, 00:15:07.133 "auth": { 00:15:07.133 "state": "completed", 00:15:07.133 "digest": "sha384", 00:15:07.133 "dhgroup": "ffdhe2048" 00:15:07.133 } 00:15:07.133 } 00:15:07.133 ]' 00:15:07.133 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:07.133 
12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:07.133 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:07.133 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:07.133 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:07.133 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:07.133 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:07.133 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:07.389 12:16:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:15:08.326 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:08.326 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:08.326 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:08.326 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.327 12:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.327 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.327 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:08.327 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.327 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:08.584 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:15:08.584 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:08.584 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:08.584 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:15:08.584 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:08.584 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:08.584 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.584 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:08.584 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:08.584 12:16:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:08.584 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.584 12:16:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:08.842 00:15:08.842 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:08.842 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:08.842 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:09.100 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:09.100 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:09.100 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:09.100 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:09.357 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:09.357 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:09.357 { 
00:15:09.357 "cntlid": 61, 00:15:09.357 "qid": 0, 00:15:09.357 "state": "enabled", 00:15:09.357 "thread": "nvmf_tgt_poll_group_000", 00:15:09.357 "listen_address": { 00:15:09.357 "trtype": "TCP", 00:15:09.357 "adrfam": "IPv4", 00:15:09.357 "traddr": "10.0.0.2", 00:15:09.357 "trsvcid": "4420" 00:15:09.357 }, 00:15:09.357 "peer_address": { 00:15:09.357 "trtype": "TCP", 00:15:09.357 "adrfam": "IPv4", 00:15:09.357 "traddr": "10.0.0.1", 00:15:09.357 "trsvcid": "34346" 00:15:09.357 }, 00:15:09.357 "auth": { 00:15:09.357 "state": "completed", 00:15:09.357 "digest": "sha384", 00:15:09.357 "dhgroup": "ffdhe2048" 00:15:09.357 } 00:15:09.357 } 00:15:09.357 ]' 00:15:09.357 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:09.357 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:09.357 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:09.357 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:09.357 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:09.357 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:09.357 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:09.357 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:09.615 12:16:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:15:10.558 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:10.558 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:10.558 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:10.558 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.558 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.558 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.558 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:10.558 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:10.558 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:15:10.816 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:15:10.816 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:10.816 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:10.816 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe2048 00:15:10.816 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:10.816 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:10.816 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:10.816 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:10.816 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:10.816 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:10.816 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:10.816 12:16:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:11.073 00:15:11.073 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:11.073 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:11.073 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:11.331 12:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:11.331 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:11.331 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:11.331 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:11.331 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:11.331 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:11.331 { 00:15:11.331 "cntlid": 63, 00:15:11.331 "qid": 0, 00:15:11.331 "state": "enabled", 00:15:11.331 "thread": "nvmf_tgt_poll_group_000", 00:15:11.331 "listen_address": { 00:15:11.331 "trtype": "TCP", 00:15:11.331 "adrfam": "IPv4", 00:15:11.331 "traddr": "10.0.0.2", 00:15:11.331 "trsvcid": "4420" 00:15:11.331 }, 00:15:11.331 "peer_address": { 00:15:11.331 "trtype": "TCP", 00:15:11.331 "adrfam": "IPv4", 00:15:11.331 "traddr": "10.0.0.1", 00:15:11.331 "trsvcid": "34364" 00:15:11.331 }, 00:15:11.331 "auth": { 00:15:11.331 "state": "completed", 00:15:11.331 "digest": "sha384", 00:15:11.331 "dhgroup": "ffdhe2048" 00:15:11.331 } 00:15:11.331 } 00:15:11.331 ]' 00:15:11.331 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:11.331 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:11.588 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:11.588 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:11.588 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:11.588 12:16:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:11.588 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:11.588 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:11.846 12:16:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 00:15:12.778 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:12.778 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:12.778 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:12.778 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:12.778 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:12.778 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:12.778 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:12.778 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:12.778 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:12.778 12:16:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:15:13.035 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:15:13.035 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:13.035 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:13.036 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:15:13.036 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:13.036 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:13.036 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.036 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.036 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.036 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.036 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.036 12:16:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:13.293 00:15:13.293 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:13.293 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:13.293 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:13.551 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:13.551 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:13.551 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:13.551 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:13.551 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:13.551 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:13.551 { 00:15:13.551 "cntlid": 65, 00:15:13.551 "qid": 0, 00:15:13.551 "state": "enabled", 00:15:13.551 "thread": "nvmf_tgt_poll_group_000", 00:15:13.551 "listen_address": { 00:15:13.551 "trtype": "TCP", 00:15:13.551 "adrfam": "IPv4", 00:15:13.551 "traddr": "10.0.0.2", 00:15:13.551 "trsvcid": "4420" 00:15:13.551 }, 00:15:13.551 "peer_address": { 00:15:13.551 "trtype": "TCP", 00:15:13.551 "adrfam": "IPv4", 00:15:13.551 "traddr": "10.0.0.1", 
00:15:13.551 "trsvcid": "34408" 00:15:13.551 }, 00:15:13.551 "auth": { 00:15:13.551 "state": "completed", 00:15:13.551 "digest": "sha384", 00:15:13.551 "dhgroup": "ffdhe3072" 00:15:13.551 } 00:15:13.551 } 00:15:13.551 ]' 00:15:13.551 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:13.551 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:13.551 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:13.809 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:13.809 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:13.809 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:13.809 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:13.809 12:16:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:14.066 12:16:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:15:14.999 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:14.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s)
00:15:14.999 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:14.999 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:14.999 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:14.999 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:14.999 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:14.999 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:14.999 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:15.257 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1
00:15:15.257 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:15.257 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:15.257 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:15:15.257 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:15.257 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:15.257 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:15.257 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:15.257 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:15.257 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:15.257 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:15.257 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:15.823
00:15:15.823 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:15.823 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:15.823 12:16:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:16.081 {
00:15:16.081 "cntlid": 67,
00:15:16.081 "qid": 0,
00:15:16.081 "state": "enabled",
00:15:16.081 "thread": "nvmf_tgt_poll_group_000",
00:15:16.081 "listen_address": {
00:15:16.081 "trtype": "TCP",
00:15:16.081 "adrfam": "IPv4",
00:15:16.081 "traddr": "10.0.0.2",
00:15:16.081 "trsvcid": "4420"
00:15:16.081 },
00:15:16.081 "peer_address": {
00:15:16.081 "trtype": "TCP",
00:15:16.081 "adrfam": "IPv4",
00:15:16.081 "traddr": "10.0.0.1",
00:15:16.081 "trsvcid": "34438"
00:15:16.081 },
00:15:16.081 "auth": {
00:15:16.081 "state": "completed",
00:15:16.081 "digest": "sha384",
00:15:16.081 "dhgroup": "ffdhe3072"
00:15:16.081 }
00:15:16.081 }
00:15:16.081 ]'
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:16.081 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:16.339 12:16:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==:
00:15:17.323 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:17.323 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:17.323 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:17.323 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.323 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.323 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.323 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:17.323 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:17.323 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:17.581 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2
00:15:17.581 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:17.581 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:17.581 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:15:17.581 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:17.581 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:17.581 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:17.581 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:17.581 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:17.581 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:17.581 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:17.581 12:16:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:17.840
00:15:17.840 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:17.840 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:17.840 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:18.098 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:18.098 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:18.098 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:18.098 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:18.098 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:18.098 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:18.098 {
00:15:18.098 "cntlid": 69,
00:15:18.098 "qid": 0,
00:15:18.098 "state": "enabled",
00:15:18.098 "thread": "nvmf_tgt_poll_group_000",
00:15:18.098 "listen_address": {
00:15:18.098 "trtype": "TCP",
00:15:18.098 "adrfam": "IPv4",
00:15:18.098 "traddr": "10.0.0.2",
00:15:18.098 "trsvcid": "4420"
00:15:18.098 },
00:15:18.098 "peer_address": {
00:15:18.098 "trtype": "TCP",
00:15:18.098 "adrfam": "IPv4",
00:15:18.098 "traddr": "10.0.0.1",
00:15:18.098 "trsvcid": "42164"
00:15:18.098 },
00:15:18.098 "auth": {
00:15:18.098 "state": "completed",
00:15:18.098 "digest": "sha384",
00:15:18.098 "dhgroup": "ffdhe3072"
00:15:18.098 }
00:15:18.098 }
00:15:18.098 ]'
00:15:18.098 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:18.098 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:18.098 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:18.356 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:18.356 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:18.356 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:18.356 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:18.356 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:18.614 12:16:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/:
00:15:19.547 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:19.547 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:19.547 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:19.547 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.547 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:19.547 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:19.547 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:19.547 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:19.547 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:15:19.805 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3
00:15:19.805 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:19.805 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:19.805 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072
00:15:19.805 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:15:19.805 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:19.805 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:15:19.805 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:19.805 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:19.805 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:19.805 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:19.805 12:16:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:15:20.063
00:15:20.063 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:20.063 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:20.063 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:20.320 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:20.320 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:20.320 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:20.320 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:20.320 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:20.320 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:20.320 {
00:15:20.320 "cntlid": 71,
00:15:20.320 "qid": 0,
00:15:20.320 "state": "enabled",
00:15:20.320 "thread": "nvmf_tgt_poll_group_000",
00:15:20.320 "listen_address": {
00:15:20.320 "trtype": "TCP",
00:15:20.320 "adrfam": "IPv4",
00:15:20.320 "traddr": "10.0.0.2",
00:15:20.320 "trsvcid": "4420"
00:15:20.320 },
00:15:20.320 "peer_address": {
00:15:20.320 "trtype": "TCP",
00:15:20.320 "adrfam": "IPv4",
00:15:20.320 "traddr": "10.0.0.1",
00:15:20.320 "trsvcid": "42184"
00:15:20.320 },
00:15:20.320 "auth": {
00:15:20.320 "state": "completed",
00:15:20.320 "digest": "sha384",
00:15:20.320 "dhgroup": "ffdhe3072"
00:15:20.320 }
00:15:20.320 }
00:15:20.320 ]'
00:15:20.320 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:20.577 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:20.577 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:20.577 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]]
00:15:20.577 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:20.577 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:20.577 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:20.577 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:20.835 12:16:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=:
00:15:21.767 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:21.767 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:21.767 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:21.767 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:21.767 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:21.767 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:21.767 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:15:21.767 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:21.767 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:21.767 12:16:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:22.025 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0
00:15:22.025 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:22.025 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:22.025 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:15:22.025 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:15:22.025 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:22.025 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:22.025 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:22.025 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.025 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:22.025 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:22.025 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:15:22.591
00:15:22.591 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:22.591 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:22.591 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:22.591 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:22.591 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:22.591 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:22.591 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:22.591 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:22.591 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:22.591 {
00:15:22.591 "cntlid": 73,
00:15:22.591 "qid": 0,
00:15:22.591 "state": "enabled",
00:15:22.591 "thread": "nvmf_tgt_poll_group_000",
00:15:22.591 "listen_address": {
00:15:22.591 "trtype": "TCP",
00:15:22.591 "adrfam": "IPv4",
00:15:22.591 "traddr": "10.0.0.2",
00:15:22.591 "trsvcid": "4420"
00:15:22.591 },
00:15:22.591 "peer_address": {
00:15:22.591 "trtype": "TCP",
00:15:22.591 "adrfam": "IPv4",
00:15:22.591 "traddr": "10.0.0.1",
00:15:22.591 "trsvcid": "42228"
00:15:22.591 },
00:15:22.591 "auth": {
00:15:22.591 "state": "completed",
00:15:22.591 "digest": "sha384",
00:15:22.591 "dhgroup": "ffdhe4096"
00:15:22.591 }
00:15:22.591 }
00:15:22.591 ]'
00:15:22.591 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:22.849 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:22.849 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:22.849 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:22.849 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:22.849 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:22.849 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:22.849 12:16:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:23.106 12:16:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=:
00:15:24.039 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:24.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:24.039 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:24.039 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.039 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.039 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.039 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:24.039 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:24.039 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:24.297 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1
00:15:24.297 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:24.297 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:24.297 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:15:24.297 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:15:24.297 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:24.297 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:24.297 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:24.297 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:24.297 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:24.297 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:24.297 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:15:24.861
00:15:24.861 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:24.861 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:24.861 12:16:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:25.120 {
00:15:25.120 "cntlid": 75,
00:15:25.120 "qid": 0,
00:15:25.120 "state": "enabled",
00:15:25.120 "thread": "nvmf_tgt_poll_group_000",
00:15:25.120 "listen_address": {
00:15:25.120 "trtype": "TCP",
00:15:25.120 "adrfam": "IPv4",
00:15:25.120 "traddr": "10.0.0.2",
00:15:25.120 "trsvcid": "4420"
00:15:25.120 },
00:15:25.120 "peer_address": {
00:15:25.120 "trtype": "TCP",
00:15:25.120 "adrfam": "IPv4",
00:15:25.120 "traddr": "10.0.0.1",
00:15:25.120 "trsvcid": "42258"
00:15:25.120 },
00:15:25.120 "auth": {
00:15:25.120 "state": "completed",
00:15:25.120 "digest": "sha384",
00:15:25.120 "dhgroup": "ffdhe4096"
00:15:25.120 }
00:15:25.120 }
00:15:25.120 ]'
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:25.120 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:15:25.377 12:16:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==:
00:15:26.310 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:15:26.310 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:15:26.310 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:15:26.310 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.310 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.310 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.310 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:15:26.310 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:26.310 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:15:26.876 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2
00:15:26.876 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:15:26.876 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384
00:15:26.876 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:15:26.876 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:15:26.876 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:15:26.876 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:26.876 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:26.876 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:26.876 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:26.876 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:26.876 12:16:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:15:27.134
00:15:27.134 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:15:27.134 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:15:27.134 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:15:27.392 {
00:15:27.392 "cntlid": 77,
00:15:27.392 "qid": 0,
00:15:27.392 "state": "enabled",
00:15:27.392 "thread": "nvmf_tgt_poll_group_000",
00:15:27.392 "listen_address": {
00:15:27.392 "trtype": "TCP",
00:15:27.392 "adrfam": "IPv4",
00:15:27.392 "traddr": "10.0.0.2",
00:15:27.392 "trsvcid": "4420"
00:15:27.392 },
00:15:27.392 "peer_address": {
00:15:27.392 "trtype": "TCP",
00:15:27.392 "adrfam": "IPv4",
00:15:27.392 "traddr": "10.0.0.1",
00:15:27.392 "trsvcid": "59346"
00:15:27.392 },
00:15:27.392 "auth": {
00:15:27.392 "state": "completed",
00:15:27.392 "digest": "sha384",
00:15:27.392 "dhgroup": "ffdhe4096"
00:15:27.392 }
00:15:27.392 }
00:15:27.392 ]'
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]]
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:15:27.392 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock
bdev_nvme_detach_controller nvme0 00:15:27.650 12:16:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:15:29.025 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:29.025 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:29.025 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:29.025 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.025 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.025 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.025 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:29.025 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:29.025 12:16:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:15:29.025 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:15:29.025 12:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:29.025 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:29.025 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:15:29.025 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:29.025 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:29.025 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:29.025 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.025 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.025 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.025 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:29.025 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:29.282 00:15:29.539 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:29.539 12:16:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:29.540 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:29.540 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:29.540 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:29.540 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:29.540 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:29.797 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:29.797 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:29.797 { 00:15:29.797 "cntlid": 79, 00:15:29.797 "qid": 0, 00:15:29.797 "state": "enabled", 00:15:29.797 "thread": "nvmf_tgt_poll_group_000", 00:15:29.797 "listen_address": { 00:15:29.797 "trtype": "TCP", 00:15:29.797 "adrfam": "IPv4", 00:15:29.797 "traddr": "10.0.0.2", 00:15:29.797 "trsvcid": "4420" 00:15:29.797 }, 00:15:29.797 "peer_address": { 00:15:29.797 "trtype": "TCP", 00:15:29.797 "adrfam": "IPv4", 00:15:29.797 "traddr": "10.0.0.1", 00:15:29.797 "trsvcid": "59374" 00:15:29.797 }, 00:15:29.797 "auth": { 00:15:29.797 "state": "completed", 00:15:29.797 "digest": "sha384", 00:15:29.797 "dhgroup": "ffdhe4096" 00:15:29.797 } 00:15:29.797 } 00:15:29.797 ]' 00:15:29.797 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:29.797 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:29.797 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:29.797 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:15:29.797 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:29.797 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:29.797 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:29.797 12:16:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:30.055 12:16:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 00:15:30.990 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:30.990 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:30.990 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:30.990 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:30.990 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:30.990 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:30.990 12:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:30.990 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:30.990 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:30.990 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:31.247 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:15:31.247 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:31.247 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:31.247 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:31.247 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:31.247 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:31.247 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.247 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:31.247 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:31.505 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:31.505 12:16:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:31.505 12:16:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:32.102 00:15:32.102 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:32.102 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:32.102 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:32.364 { 00:15:32.364 "cntlid": 81, 00:15:32.364 "qid": 0, 00:15:32.364 "state": "enabled", 00:15:32.364 "thread": 
"nvmf_tgt_poll_group_000", 00:15:32.364 "listen_address": { 00:15:32.364 "trtype": "TCP", 00:15:32.364 "adrfam": "IPv4", 00:15:32.364 "traddr": "10.0.0.2", 00:15:32.364 "trsvcid": "4420" 00:15:32.364 }, 00:15:32.364 "peer_address": { 00:15:32.364 "trtype": "TCP", 00:15:32.364 "adrfam": "IPv4", 00:15:32.364 "traddr": "10.0.0.1", 00:15:32.364 "trsvcid": "59396" 00:15:32.364 }, 00:15:32.364 "auth": { 00:15:32.364 "state": "completed", 00:15:32.364 "digest": "sha384", 00:15:32.364 "dhgroup": "ffdhe6144" 00:15:32.364 } 00:15:32.364 } 00:15:32.364 ]' 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:32.364 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:32.622 12:16:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:15:33.559 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:33.559 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:33.559 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:33.559 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.559 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.559 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.559 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:33.559 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.559 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:33.817 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:15:33.817 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:33.817 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:33.817 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe6144 00:15:33.817 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:33.817 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:33.817 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.817 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:33.817 12:16:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:33.817 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:33.817 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:33.817 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:34.426 00:15:34.426 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:34.426 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:34.426 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:15:34.684 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:34.684 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:34.684 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:34.684 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:34.684 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:34.684 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:34.684 { 00:15:34.684 "cntlid": 83, 00:15:34.684 "qid": 0, 00:15:34.684 "state": "enabled", 00:15:34.684 "thread": "nvmf_tgt_poll_group_000", 00:15:34.684 "listen_address": { 00:15:34.684 "trtype": "TCP", 00:15:34.684 "adrfam": "IPv4", 00:15:34.684 "traddr": "10.0.0.2", 00:15:34.684 "trsvcid": "4420" 00:15:34.684 }, 00:15:34.684 "peer_address": { 00:15:34.684 "trtype": "TCP", 00:15:34.684 "adrfam": "IPv4", 00:15:34.684 "traddr": "10.0.0.1", 00:15:34.684 "trsvcid": "59436" 00:15:34.684 }, 00:15:34.684 "auth": { 00:15:34.684 "state": "completed", 00:15:34.684 "digest": "sha384", 00:15:34.684 "dhgroup": "ffdhe6144" 00:15:34.684 } 00:15:34.684 } 00:15:34.684 ]' 00:15:34.684 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:34.684 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:34.684 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:34.684 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:34.684 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:34.685 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:34.685 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:34.685 12:16:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:34.943 12:16:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:15:35.880 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:35.880 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:35.880 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:35.880 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:35.880 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.140 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.140 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:36.140 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.140 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:36.140 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:15:36.140 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:36.140 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:36.140 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:36.140 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:36.140 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:36.140 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.140 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:36.140 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:36.399 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:36.399 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.399 12:16:29 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:36.965 00:15:36.965 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:36.965 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:36.965 12:16:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:37.222 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:37.222 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:37.222 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:37.222 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:37.222 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:37.222 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:37.222 { 00:15:37.222 "cntlid": 85, 00:15:37.222 "qid": 0, 00:15:37.222 "state": "enabled", 00:15:37.222 "thread": "nvmf_tgt_poll_group_000", 00:15:37.222 "listen_address": { 00:15:37.222 "trtype": "TCP", 00:15:37.222 "adrfam": "IPv4", 00:15:37.222 "traddr": "10.0.0.2", 00:15:37.222 "trsvcid": "4420" 00:15:37.222 }, 00:15:37.222 "peer_address": { 00:15:37.222 "trtype": "TCP", 00:15:37.222 "adrfam": "IPv4", 00:15:37.222 "traddr": "10.0.0.1", 
00:15:37.222 "trsvcid": "41328" 00:15:37.222 }, 00:15:37.222 "auth": { 00:15:37.222 "state": "completed", 00:15:37.222 "digest": "sha384", 00:15:37.222 "dhgroup": "ffdhe6144" 00:15:37.222 } 00:15:37.222 } 00:15:37.222 ]' 00:15:37.222 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:37.222 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:37.222 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:37.222 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:37.222 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:37.223 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:37.223 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:37.223 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:37.480 12:16:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:15:38.413 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:38.671 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:38.671 12:16:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:38.671 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:38.929 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:38.929 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:38.929 12:16:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:39.495 00:15:39.495 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:39.495 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:39.495 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:39.495 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:39.495 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:39.495 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:39.495 12:16:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:39.752 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:39.752 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:39.752 { 00:15:39.752 "cntlid": 87, 00:15:39.752 "qid": 0, 00:15:39.752 "state": "enabled", 00:15:39.752 "thread": "nvmf_tgt_poll_group_000", 00:15:39.752 "listen_address": { 00:15:39.752 "trtype": "TCP", 00:15:39.752 "adrfam": "IPv4", 00:15:39.752 "traddr": "10.0.0.2", 00:15:39.752 "trsvcid": "4420" 00:15:39.752 }, 00:15:39.752 "peer_address": { 00:15:39.752 "trtype": "TCP", 00:15:39.752 "adrfam": "IPv4", 00:15:39.752 "traddr": "10.0.0.1", 00:15:39.752 "trsvcid": "41372" 00:15:39.752 }, 00:15:39.752 "auth": { 00:15:39.752 "state": "completed", 00:15:39.752 "digest": "sha384", 00:15:39.752 "dhgroup": "ffdhe6144" 00:15:39.752 } 00:15:39.752 } 00:15:39.752 ]' 00:15:39.752 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:39.752 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:39.752 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:39.752 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:15:39.752 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:39.752 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:39.752 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:39.752 12:16:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:40.009 12:16:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 00:15:40.939 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:40.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:40.939 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:40.939 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:40.939 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:40.939 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:40.939 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:40.939 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:40.939 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:40.939 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:41.197 12:16:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:15:41.197 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:41.197 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:41.197 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:41.197 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:41.197 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:41.197 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.197 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:41.197 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.197 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:41.197 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:41.197 12:16:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 
--dhchap-ctrlr-key ckey0 00:15:42.129 00:15:42.129 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:42.129 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:42.129 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:42.387 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:42.387 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:42.387 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:42.387 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.387 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:42.387 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:42.387 { 00:15:42.387 "cntlid": 89, 00:15:42.387 "qid": 0, 00:15:42.387 "state": "enabled", 00:15:42.387 "thread": "nvmf_tgt_poll_group_000", 00:15:42.387 "listen_address": { 00:15:42.387 "trtype": "TCP", 00:15:42.387 "adrfam": "IPv4", 00:15:42.387 "traddr": "10.0.0.2", 00:15:42.387 "trsvcid": "4420" 00:15:42.387 }, 00:15:42.387 "peer_address": { 00:15:42.387 "trtype": "TCP", 00:15:42.387 "adrfam": "IPv4", 00:15:42.387 "traddr": "10.0.0.1", 00:15:42.387 "trsvcid": "41394" 00:15:42.387 }, 00:15:42.387 "auth": { 00:15:42.387 "state": "completed", 00:15:42.387 "digest": "sha384", 00:15:42.387 "dhgroup": "ffdhe8192" 00:15:42.387 } 00:15:42.387 } 00:15:42.387 ]' 00:15:42.387 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:42.387 
12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:42.387 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:42.387 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:42.387 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:42.387 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:42.387 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:42.387 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:42.644 12:16:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:15:43.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:43.576 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:43.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:43.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:15:43.576 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.833 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:43.833 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:43.833 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:43.833 12:16:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:44.091 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:15:44.091 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:44.091 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:44.091 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:44.091 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:44.091 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.091 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.091 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:44.091 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:15:44.091 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:44.091 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:44.091 12:16:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.023 00:15:45.023 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:45.023 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:45.023 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
qpairs='[ 00:15:45.281 { 00:15:45.281 "cntlid": 91, 00:15:45.281 "qid": 0, 00:15:45.281 "state": "enabled", 00:15:45.281 "thread": "nvmf_tgt_poll_group_000", 00:15:45.281 "listen_address": { 00:15:45.281 "trtype": "TCP", 00:15:45.281 "adrfam": "IPv4", 00:15:45.281 "traddr": "10.0.0.2", 00:15:45.281 "trsvcid": "4420" 00:15:45.281 }, 00:15:45.281 "peer_address": { 00:15:45.281 "trtype": "TCP", 00:15:45.281 "adrfam": "IPv4", 00:15:45.281 "traddr": "10.0.0.1", 00:15:45.281 "trsvcid": "41426" 00:15:45.281 }, 00:15:45.281 "auth": { 00:15:45.281 "state": "completed", 00:15:45.281 "digest": "sha384", 00:15:45.281 "dhgroup": "ffdhe8192" 00:15:45.281 } 00:15:45.281 } 00:15:45.281 ]' 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:45.281 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.539 12:16:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 
--hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:15:46.472 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:46.472 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:46.472 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:46.472 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.472 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.730 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.730 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:46.730 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.730 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:46.730 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:15:46.730 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:46.730 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:46.730 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:46.730 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:46.730 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:46.730 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.730 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:46.730 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.988 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:46.988 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:46.988 12:16:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.922 00:15:47.922 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:47.922 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:47.922 12:16:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.922 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.922 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.922 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:47.922 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.922 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:47.922 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:47.922 { 00:15:47.922 "cntlid": 93, 00:15:47.922 "qid": 0, 00:15:47.922 "state": "enabled", 00:15:47.922 "thread": "nvmf_tgt_poll_group_000", 00:15:47.922 "listen_address": { 00:15:47.922 "trtype": "TCP", 00:15:47.922 "adrfam": "IPv4", 00:15:47.922 "traddr": "10.0.0.2", 00:15:47.922 "trsvcid": "4420" 00:15:47.922 }, 00:15:47.922 "peer_address": { 00:15:47.922 "trtype": "TCP", 00:15:47.922 "adrfam": "IPv4", 00:15:47.922 "traddr": "10.0.0.1", 00:15:47.922 "trsvcid": "46528" 00:15:47.922 }, 00:15:47.922 "auth": { 00:15:47.922 "state": "completed", 00:15:47.922 "digest": "sha384", 00:15:47.922 "dhgroup": "ffdhe8192" 00:15:47.922 } 00:15:47.922 } 00:15:47.922 ]' 00:15:47.922 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:47.922 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:47.922 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:48.180 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 
00:15:48.180 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:48.180 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.180 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.180 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.438 12:16:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:15:49.371 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:49.371 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:49.371 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:49.371 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.371 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.371 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.371 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:49.371 12:16:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:49.371 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:15:49.629 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:15:49.629 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:49.629 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:15:49.629 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:15:49.629 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:49.629 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:49.629 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:49.629 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:49.629 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.629 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:49.629 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 
00:15:49.629 12:16:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:50.560 00:15:50.560 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:50.560 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:50.560 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:50.818 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:50.818 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:50.818 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:50.818 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.818 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:50.818 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:50.818 { 00:15:50.818 "cntlid": 95, 00:15:50.818 "qid": 0, 00:15:50.818 "state": "enabled", 00:15:50.818 "thread": "nvmf_tgt_poll_group_000", 00:15:50.818 "listen_address": { 00:15:50.818 "trtype": "TCP", 00:15:50.818 "adrfam": "IPv4", 00:15:50.818 "traddr": "10.0.0.2", 00:15:50.818 "trsvcid": "4420" 00:15:50.818 }, 00:15:50.818 "peer_address": { 00:15:50.818 "trtype": "TCP", 00:15:50.818 "adrfam": "IPv4", 00:15:50.818 "traddr": "10.0.0.1", 
00:15:50.818 "trsvcid": "46556" 00:15:50.818 }, 00:15:50.818 "auth": { 00:15:50.818 "state": "completed", 00:15:50.818 "digest": "sha384", 00:15:50.818 "dhgroup": "ffdhe8192" 00:15:50.818 } 00:15:50.818 } 00:15:50.818 ]' 00:15:50.818 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:50.818 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:15:50.818 12:16:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:50.818 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:15:50.818 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:50.818 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:50.818 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:50.818 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.384 12:16:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 00:15:52.318 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:52.318 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:52.318 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:52.318 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.318 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.318 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.318 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:15:52.318 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:15:52.318 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:52.318 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.318 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:52.575 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:15:52.576 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:52.576 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:52.576 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:52.576 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:15:52.576 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.576 12:16:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.576 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:52.576 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.576 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:52.576 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.576 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:52.834 00:15:52.834 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:52.834 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.834 12:16:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:53.092 { 00:15:53.092 "cntlid": 97, 00:15:53.092 "qid": 0, 00:15:53.092 "state": "enabled", 00:15:53.092 "thread": "nvmf_tgt_poll_group_000", 00:15:53.092 "listen_address": { 00:15:53.092 "trtype": "TCP", 00:15:53.092 "adrfam": "IPv4", 00:15:53.092 "traddr": "10.0.0.2", 00:15:53.092 "trsvcid": "4420" 00:15:53.092 }, 00:15:53.092 "peer_address": { 00:15:53.092 "trtype": "TCP", 00:15:53.092 "adrfam": "IPv4", 00:15:53.092 "traddr": "10.0.0.1", 00:15:53.092 "trsvcid": "46576" 00:15:53.092 }, 00:15:53.092 "auth": { 00:15:53.092 "state": "completed", 00:15:53.092 "digest": "sha512", 00:15:53.092 "dhgroup": "null" 00:15:53.092 } 00:15:53.092 } 00:15:53.092 ]' 00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:15:53.092 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:53.350 12:16:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:15:54.312 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:54.312 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:54.312 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:54.312 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.312 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.312 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.312 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:54.312 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.312 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups null 00:15:54.570 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:15:54.570 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:54.571 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:54.571 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:54.571 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:15:54.571 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:54.571 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.571 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:54.571 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.571 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:54.571 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.571 12:16:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:54.828 00:15:55.087 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:55.087 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.087 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:55.087 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.345 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.346 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:55.346 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.346 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:55.346 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:55.346 { 00:15:55.346 "cntlid": 99, 00:15:55.346 "qid": 0, 00:15:55.346 "state": "enabled", 00:15:55.346 "thread": "nvmf_tgt_poll_group_000", 00:15:55.346 "listen_address": { 00:15:55.346 "trtype": "TCP", 00:15:55.346 "adrfam": "IPv4", 00:15:55.346 "traddr": "10.0.0.2", 00:15:55.346 "trsvcid": "4420" 00:15:55.346 }, 00:15:55.346 "peer_address": { 00:15:55.346 "trtype": "TCP", 00:15:55.346 "adrfam": "IPv4", 00:15:55.346 "traddr": "10.0.0.1", 00:15:55.346 "trsvcid": "46604" 00:15:55.346 }, 00:15:55.346 "auth": { 00:15:55.346 "state": "completed", 00:15:55.346 "digest": "sha512", 00:15:55.346 "dhgroup": "null" 00:15:55.346 } 00:15:55.346 } 00:15:55.346 ]' 00:15:55.346 
12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:55.346 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:55.346 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:55.346 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:55.346 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:55.346 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.346 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.346 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:55.605 12:16:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:15:56.543 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.543 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.543 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:56.543 12:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.543 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.543 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.543 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:56.543 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:56.543 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:56.807 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:15:56.807 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:56.807 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:56.807 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:56.807 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:15:56.807 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.807 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.807 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:56.807 12:16:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.807 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:56.807 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:56.807 12:16:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:57.114 00:15:57.114 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:57.114 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.114 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:57.395 { 00:15:57.395 "cntlid": 101, 00:15:57.395 "qid": 0, 00:15:57.395 "state": "enabled", 00:15:57.395 "thread": "nvmf_tgt_poll_group_000", 00:15:57.395 "listen_address": { 00:15:57.395 "trtype": "TCP", 00:15:57.395 "adrfam": "IPv4", 00:15:57.395 "traddr": "10.0.0.2", 00:15:57.395 "trsvcid": "4420" 00:15:57.395 }, 00:15:57.395 "peer_address": { 00:15:57.395 "trtype": "TCP", 00:15:57.395 "adrfam": "IPv4", 00:15:57.395 "traddr": "10.0.0.1", 00:15:57.395 "trsvcid": "52668" 00:15:57.395 }, 00:15:57.395 "auth": { 00:15:57.395 "state": "completed", 00:15:57.395 "digest": "sha512", 00:15:57.395 "dhgroup": "null" 00:15:57.395 } 00:15:57.395 } 00:15:57.395 ]' 00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.395 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.655 12:16:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:15:58.591 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.591 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.591 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:15:58.591 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:58.591 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.850 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:58.850 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:15:58.850 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:58.850 12:16:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:15:59.108 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:15:59.108 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:15:59.108 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:15:59.108 12:16:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:15:59.108 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:15:59.108 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.108 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:15:59.108 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.108 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.108 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.108 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.108 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:15:59.366 00:15:59.366 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:15:59.366 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:15:59.366 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:15:59.623 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:59.624 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:59.624 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:15:59.624 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.624 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:15:59.624 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:15:59.624 { 00:15:59.624 "cntlid": 103, 00:15:59.624 "qid": 0, 00:15:59.624 "state": "enabled", 00:15:59.624 "thread": "nvmf_tgt_poll_group_000", 00:15:59.624 "listen_address": { 00:15:59.624 "trtype": "TCP", 00:15:59.624 "adrfam": "IPv4", 00:15:59.624 "traddr": "10.0.0.2", 00:15:59.624 "trsvcid": "4420" 00:15:59.624 }, 00:15:59.624 "peer_address": { 00:15:59.624 "trtype": "TCP", 00:15:59.624 "adrfam": "IPv4", 00:15:59.624 "traddr": "10.0.0.1", 00:15:59.624 "trsvcid": "52696" 00:15:59.624 }, 00:15:59.624 "auth": { 00:15:59.624 "state": "completed", 00:15:59.624 "digest": "sha512", 00:15:59.624 "dhgroup": "null" 00:15:59.624 } 00:15:59.624 } 00:15:59.624 ]' 00:15:59.624 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:15:59.624 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:15:59.624 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:15:59.624 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:15:59.624 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 
00:15:59.883 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.884 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.884 12:16:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.142 12:16:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 00:16:01.077 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.077 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.077 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:01.077 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.077 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.077 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.077 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:01.077 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:01.077 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:01.077 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:01.335 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:16:01.335 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:01.335 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:01.335 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:01.336 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:01.336 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.336 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.336 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.336 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.336 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.336 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.336 12:16:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:01.593 00:16:01.593 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:01.593 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:01.593 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.851 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.851 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.851 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:01.851 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.851 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:01.851 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:01.851 { 00:16:01.851 "cntlid": 105, 00:16:01.851 "qid": 0, 00:16:01.851 "state": "enabled", 00:16:01.851 "thread": "nvmf_tgt_poll_group_000", 00:16:01.851 "listen_address": { 00:16:01.851 "trtype": "TCP", 00:16:01.851 "adrfam": "IPv4", 00:16:01.851 "traddr": "10.0.0.2", 00:16:01.851 "trsvcid": "4420" 00:16:01.851 }, 00:16:01.851 "peer_address": { 00:16:01.851 "trtype": "TCP", 00:16:01.851 "adrfam": "IPv4", 00:16:01.851 "traddr": "10.0.0.1", 
00:16:01.851 "trsvcid": "52734" 00:16:01.851 }, 00:16:01.851 "auth": { 00:16:01.851 "state": "completed", 00:16:01.851 "digest": "sha512", 00:16:01.851 "dhgroup": "ffdhe2048" 00:16:01.851 } 00:16:01.851 } 00:16:01.852 ]' 00:16:01.852 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:01.852 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:01.852 12:16:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:01.852 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:01.852 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:01.852 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:01.852 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:01.852 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.109 12:16:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:16:03.046 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.304 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 
controller(s) 00:16:03.304 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:03.304 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.304 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.304 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.304 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:03.304 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:03.304 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:03.562 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:16:03.562 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:03.563 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:03.563 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:03.563 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:03.563 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.563 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.563 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:03.563 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.563 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:03.563 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.563 12:16:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:03.820 00:16:03.820 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:03.820 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:03.820 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:04.078 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:04.078 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:04.078 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:16:04.078 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.078 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:04.078 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:04.078 { 00:16:04.078 "cntlid": 107, 00:16:04.078 "qid": 0, 00:16:04.078 "state": "enabled", 00:16:04.078 "thread": "nvmf_tgt_poll_group_000", 00:16:04.078 "listen_address": { 00:16:04.078 "trtype": "TCP", 00:16:04.078 "adrfam": "IPv4", 00:16:04.078 "traddr": "10.0.0.2", 00:16:04.078 "trsvcid": "4420" 00:16:04.078 }, 00:16:04.078 "peer_address": { 00:16:04.078 "trtype": "TCP", 00:16:04.078 "adrfam": "IPv4", 00:16:04.078 "traddr": "10.0.0.1", 00:16:04.078 "trsvcid": "52774" 00:16:04.078 }, 00:16:04.078 "auth": { 00:16:04.078 "state": "completed", 00:16:04.078 "digest": "sha512", 00:16:04.078 "dhgroup": "ffdhe2048" 00:16:04.078 } 00:16:04.078 } 00:16:04.078 ]' 00:16:04.078 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:04.078 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:04.078 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:04.337 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:04.337 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:04.337 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:04.337 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:04.337 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.595 12:16:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:16:05.528 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:05.528 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:05.528 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:05.528 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.528 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.528 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.528 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:05.528 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:05.528 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:05.786 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # 
connect_authenticate sha512 ffdhe2048 2 00:16:05.786 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:05.786 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:05.786 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:05.786 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:05.786 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:05.786 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.786 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:05.786 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.786 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:05.786 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:05.786 12:16:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.044 00:16:06.044 12:16:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:06.044 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:06.044 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.303 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.303 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.303 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:06.303 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.303 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:06.303 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:06.303 { 00:16:06.303 "cntlid": 109, 00:16:06.303 "qid": 0, 00:16:06.303 "state": "enabled", 00:16:06.303 "thread": "nvmf_tgt_poll_group_000", 00:16:06.303 "listen_address": { 00:16:06.303 "trtype": "TCP", 00:16:06.303 "adrfam": "IPv4", 00:16:06.303 "traddr": "10.0.0.2", 00:16:06.303 "trsvcid": "4420" 00:16:06.303 }, 00:16:06.303 "peer_address": { 00:16:06.303 "trtype": "TCP", 00:16:06.303 "adrfam": "IPv4", 00:16:06.303 "traddr": "10.0.0.1", 00:16:06.303 "trsvcid": "52808" 00:16:06.303 }, 00:16:06.303 "auth": { 00:16:06.303 "state": "completed", 00:16:06.303 "digest": "sha512", 00:16:06.303 "dhgroup": "ffdhe2048" 00:16:06.303 } 00:16:06.303 } 00:16:06.303 ]' 00:16:06.303 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:06.561 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:06.561 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:06.561 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:06.561 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:06.561 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:06.561 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:06.561 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:06.820 12:16:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:16:07.756 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.756 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.756 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:07.756 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:07.756 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:07.756 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:07.756 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:07.756 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:07.756 12:17:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:16:08.014 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:16:08.014 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:08.014 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:08.014 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:08.014 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:08.014 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:08.014 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:08.014 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.014 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.014 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:16:08.014 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.014 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:08.272 00:16:08.272 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:08.272 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:08.272 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.531 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.531 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.531 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:08.531 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:08.531 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:08.531 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:08.531 { 00:16:08.531 "cntlid": 111, 00:16:08.531 "qid": 0, 00:16:08.531 "state": "enabled", 00:16:08.531 "thread": "nvmf_tgt_poll_group_000", 
00:16:08.531 "listen_address": { 00:16:08.531 "trtype": "TCP", 00:16:08.531 "adrfam": "IPv4", 00:16:08.531 "traddr": "10.0.0.2", 00:16:08.531 "trsvcid": "4420" 00:16:08.531 }, 00:16:08.531 "peer_address": { 00:16:08.531 "trtype": "TCP", 00:16:08.531 "adrfam": "IPv4", 00:16:08.531 "traddr": "10.0.0.1", 00:16:08.531 "trsvcid": "35100" 00:16:08.531 }, 00:16:08.531 "auth": { 00:16:08.531 "state": "completed", 00:16:08.531 "digest": "sha512", 00:16:08.531 "dhgroup": "ffdhe2048" 00:16:08.531 } 00:16:08.531 } 00:16:08.531 ]' 00:16:08.531 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:08.531 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:08.531 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:08.790 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:08.790 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:08.790 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.790 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.790 12:17:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:09.049 12:17:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 
00:16:09.986 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.986 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.986 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:09.986 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:09.986 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.986 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:09.986 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.986 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:09.986 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:09.986 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:10.243 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:16:10.243 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:10.243 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:10.243 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:10.243 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key0 00:16:10.243 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:10.243 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.243 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.243 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.243 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.244 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.244 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:10.501 00:16:10.501 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:10.501 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:10.501 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.758 12:17:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.758 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.758 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:10.758 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.758 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:10.758 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:10.758 { 00:16:10.758 "cntlid": 113, 00:16:10.758 "qid": 0, 00:16:10.758 "state": "enabled", 00:16:10.758 "thread": "nvmf_tgt_poll_group_000", 00:16:10.758 "listen_address": { 00:16:10.758 "trtype": "TCP", 00:16:10.758 "adrfam": "IPv4", 00:16:10.758 "traddr": "10.0.0.2", 00:16:10.758 "trsvcid": "4420" 00:16:10.758 }, 00:16:10.758 "peer_address": { 00:16:10.758 "trtype": "TCP", 00:16:10.758 "adrfam": "IPv4", 00:16:10.758 "traddr": "10.0.0.1", 00:16:10.758 "trsvcid": "35128" 00:16:10.758 }, 00:16:10.758 "auth": { 00:16:10.758 "state": "completed", 00:16:10.758 "digest": "sha512", 00:16:10.758 "dhgroup": "ffdhe3072" 00:16:10.758 } 00:16:10.758 } 00:16:10.758 ]' 00:16:10.759 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:10.759 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:10.759 12:17:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:11.017 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:11.017 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:11.017 12:17:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.017 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.017 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.275 12:17:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:16:12.212 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.212 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.212 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:12.212 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.212 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.212 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.212 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:12.212 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:16:12.212 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:12.470 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:16:12.470 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:12.470 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:12.470 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:12.470 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:12.470 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.470 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.470 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.470 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.470 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.470 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.470 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:12.729 00:16:12.729 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:12.729 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:12.729 12:17:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:12.987 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:12.987 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:12.987 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:12.987 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.987 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:12.987 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:12.987 { 00:16:12.987 "cntlid": 115, 00:16:12.987 "qid": 0, 00:16:12.987 "state": "enabled", 00:16:12.987 "thread": "nvmf_tgt_poll_group_000", 00:16:12.987 "listen_address": { 00:16:12.987 "trtype": "TCP", 00:16:12.987 "adrfam": "IPv4", 00:16:12.987 "traddr": "10.0.0.2", 00:16:12.987 "trsvcid": "4420" 00:16:12.987 }, 00:16:12.987 "peer_address": { 00:16:12.987 "trtype": "TCP", 00:16:12.987 "adrfam": "IPv4", 00:16:12.987 "traddr": "10.0.0.1", 00:16:12.987 "trsvcid": "35152" 00:16:12.987 
}, 00:16:12.987 "auth": { 00:16:12.987 "state": "completed", 00:16:12.987 "digest": "sha512", 00:16:12.987 "dhgroup": "ffdhe3072" 00:16:12.987 } 00:16:12.987 } 00:16:12.987 ]' 00:16:12.987 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:12.987 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:12.987 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:12.987 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:12.987 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:13.246 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.246 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.247 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.505 12:17:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:16:14.439 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.439 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:14.439 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.439 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.439 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.439 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:14.439 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:14.439 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:14.697 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:16:14.697 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:14.697 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:14.697 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:14.697 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:14.697 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.697 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key 
key2 --dhchap-ctrlr-key ckey2 00:16:14.697 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:14.697 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:14.697 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.697 12:17:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:14.955 00:16:14.955 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:14.955 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:14.955 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:15.212 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.212 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.212 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:15.212 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@10 -- # set +x 00:16:15.212 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:15.212 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:15.212 { 00:16:15.212 "cntlid": 117, 00:16:15.212 "qid": 0, 00:16:15.212 "state": "enabled", 00:16:15.212 "thread": "nvmf_tgt_poll_group_000", 00:16:15.212 "listen_address": { 00:16:15.212 "trtype": "TCP", 00:16:15.212 "adrfam": "IPv4", 00:16:15.212 "traddr": "10.0.0.2", 00:16:15.212 "trsvcid": "4420" 00:16:15.212 }, 00:16:15.212 "peer_address": { 00:16:15.212 "trtype": "TCP", 00:16:15.212 "adrfam": "IPv4", 00:16:15.212 "traddr": "10.0.0.1", 00:16:15.212 "trsvcid": "35180" 00:16:15.212 }, 00:16:15.212 "auth": { 00:16:15.212 "state": "completed", 00:16:15.212 "digest": "sha512", 00:16:15.212 "dhgroup": "ffdhe3072" 00:16:15.212 } 00:16:15.212 } 00:16:15.212 ]' 00:16:15.212 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:15.212 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:15.212 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:15.471 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:15.471 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:15.471 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.471 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.471 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:16:15.728 12:17:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:16:16.665 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.665 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.665 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:16.665 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.665 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.665 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.665 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:16.665 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:16.665 12:17:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:16:16.923 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:16:16.923 12:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:16.923 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:16.923 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:16.923 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:16.923 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.923 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:16.923 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:16.923 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.923 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:16.923 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:16.923 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:17.181 00:16:17.181 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:17.181 12:17:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:17.181 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:17.438 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:17.438 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:17.438 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:17.438 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.438 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:17.438 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:17.438 { 00:16:17.438 "cntlid": 119, 00:16:17.438 "qid": 0, 00:16:17.438 "state": "enabled", 00:16:17.438 "thread": "nvmf_tgt_poll_group_000", 00:16:17.438 "listen_address": { 00:16:17.438 "trtype": "TCP", 00:16:17.438 "adrfam": "IPv4", 00:16:17.438 "traddr": "10.0.0.2", 00:16:17.438 "trsvcid": "4420" 00:16:17.438 }, 00:16:17.438 "peer_address": { 00:16:17.438 "trtype": "TCP", 00:16:17.438 "adrfam": "IPv4", 00:16:17.438 "traddr": "10.0.0.1", 00:16:17.438 "trsvcid": "39606" 00:16:17.438 }, 00:16:17.438 "auth": { 00:16:17.438 "state": "completed", 00:16:17.438 "digest": "sha512", 00:16:17.438 "dhgroup": "ffdhe3072" 00:16:17.438 } 00:16:17.438 } 00:16:17.438 ]' 00:16:17.438 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:17.438 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:17.438 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:17.696 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:17.696 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:17.696 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.696 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.696 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.954 12:17:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 00:16:18.890 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:18.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:18.890 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:18.890 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:18.890 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.890 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:18.890 12:17:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:18.890 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:18.890 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:18.890 12:17:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:19.147 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:16:19.147 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:19.147 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:19.147 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:19.147 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:19.147 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.147 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.147 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.147 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.147 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.147 12:17:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.147 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:19.405 00:16:19.405 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:19.405 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:19.405 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:19.662 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:19.662 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:19.662 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:19.662 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.662 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:19.662 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:19.662 { 00:16:19.662 "cntlid": 121, 00:16:19.662 "qid": 0, 00:16:19.662 "state": "enabled", 00:16:19.662 "thread": 
"nvmf_tgt_poll_group_000", 00:16:19.662 "listen_address": { 00:16:19.662 "trtype": "TCP", 00:16:19.662 "adrfam": "IPv4", 00:16:19.662 "traddr": "10.0.0.2", 00:16:19.662 "trsvcid": "4420" 00:16:19.662 }, 00:16:19.662 "peer_address": { 00:16:19.662 "trtype": "TCP", 00:16:19.662 "adrfam": "IPv4", 00:16:19.662 "traddr": "10.0.0.1", 00:16:19.662 "trsvcid": "39620" 00:16:19.662 }, 00:16:19.663 "auth": { 00:16:19.663 "state": "completed", 00:16:19.663 "digest": "sha512", 00:16:19.663 "dhgroup": "ffdhe4096" 00:16:19.663 } 00:16:19.663 } 00:16:19.663 ]' 00:16:19.663 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:19.921 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:19.921 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:19.921 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:19.921 12:17:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:19.921 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:19.921 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:19.921 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.187 12:17:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:16:21.124 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.124 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.124 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:21.124 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.124 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.124 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.124 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:21.124 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.124 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:21.382 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:16:21.382 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:21.382 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:21.382 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # 
dhgroup=ffdhe4096 00:16:21.382 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:21.382 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.382 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.382 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.382 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.382 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:21.382 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.382 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:21.949 00:16:21.949 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:21.949 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:21.949 12:17:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_get_controllers 00:16:21.949 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:21.949 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:21.949 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:21.949 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.208 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:22.208 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:22.208 { 00:16:22.208 "cntlid": 123, 00:16:22.208 "qid": 0, 00:16:22.208 "state": "enabled", 00:16:22.208 "thread": "nvmf_tgt_poll_group_000", 00:16:22.208 "listen_address": { 00:16:22.208 "trtype": "TCP", 00:16:22.208 "adrfam": "IPv4", 00:16:22.208 "traddr": "10.0.0.2", 00:16:22.208 "trsvcid": "4420" 00:16:22.208 }, 00:16:22.208 "peer_address": { 00:16:22.208 "trtype": "TCP", 00:16:22.208 "adrfam": "IPv4", 00:16:22.208 "traddr": "10.0.0.1", 00:16:22.208 "trsvcid": "39660" 00:16:22.208 }, 00:16:22.208 "auth": { 00:16:22.208 "state": "completed", 00:16:22.208 "digest": "sha512", 00:16:22.208 "dhgroup": "ffdhe4096" 00:16:22.208 } 00:16:22.208 } 00:16:22.208 ]' 00:16:22.208 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:22.208 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:22.208 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:22.208 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:22.208 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:22.208 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.208 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.208 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.466 12:17:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:16:23.404 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.404 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.404 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:23.404 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.404 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.404 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.404 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:23.404 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc 
bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:23.404 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:16:23.662 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:16:23.662 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:23.662 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:23.662 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:23.662 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:23.662 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.662 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.662 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:23.662 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.662 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:23.662 12:17:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:23.662 12:17:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:24.232 00:16:24.232 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:24.232 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:24.232 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:24.535 { 00:16:24.535 "cntlid": 125, 00:16:24.535 "qid": 0, 00:16:24.535 "state": "enabled", 00:16:24.535 "thread": "nvmf_tgt_poll_group_000", 00:16:24.535 "listen_address": { 00:16:24.535 "trtype": "TCP", 00:16:24.535 "adrfam": "IPv4", 00:16:24.535 "traddr": "10.0.0.2", 00:16:24.535 "trsvcid": "4420" 00:16:24.535 }, 00:16:24.535 "peer_address": { 00:16:24.535 "trtype": "TCP", 00:16:24.535 "adrfam": "IPv4", 00:16:24.535 "traddr": "10.0.0.1", 
00:16:24.535 "trsvcid": "39698"
00:16:24.535 },
00:16:24.535 "auth": {
00:16:24.535 "state": "completed",
00:16:24.535 "digest": "sha512",
00:16:24.535 "dhgroup": "ffdhe4096"
00:16:24.535 }
00:16:24.535 }
00:16:24.535 ]'
00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:24.535 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:24.794 12:17:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/:
00:16:25.729 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:25.729 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:25.729 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:25.729 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:25.729 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.729 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:25.729 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:25.729 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:25.729 12:17:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:16:25.988 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3
00:16:25.988 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:25.988 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:25.988 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096
00:16:25.988 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:25.988 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:25.988 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:25.988 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:25.988 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:25.988 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:25.988 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:25.988 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:26.554
00:16:26.554 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:26.554 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:26.555 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:26.813 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:26.813 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:26.813 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:26.813 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:26.813 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:26.813 {
00:16:26.813 "cntlid": 127,
00:16:26.813 "qid": 0,
00:16:26.813 "state": "enabled",
00:16:26.813 "thread": "nvmf_tgt_poll_group_000",
00:16:26.813 "listen_address": {
00:16:26.813 "trtype": "TCP",
00:16:26.813 "adrfam": "IPv4",
00:16:26.813 "traddr": "10.0.0.2",
00:16:26.813 "trsvcid": "4420"
00:16:26.813 },
00:16:26.813 "peer_address": {
00:16:26.813 "trtype": "TCP",
00:16:26.813 "adrfam": "IPv4",
00:16:26.813 "traddr": "10.0.0.1",
00:16:26.813 "trsvcid": "33834"
00:16:26.813 },
00:16:26.813 "auth": {
00:16:26.813 "state": "completed",
00:16:26.813 "digest": "sha512",
00:16:26.813 "dhgroup": "ffdhe4096"
00:16:26.813 }
00:16:26.813 }
00:16:26.813 ]'
00:16:26.813 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:26.813 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:26.813 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:26.813 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]]
00:16:26.813 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:26.813 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:26.813 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:26.813 12:17:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:27.073 12:17:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=:
00:16:28.010 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:28.269 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:28.269 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:28.269 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.269 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.269 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.269 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:28.269 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:28.269 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:28.269 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:28.527 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0
00:16:28.527 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:28.527 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:28.527 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:28.527 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:28.527 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:28.527 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:28.527 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:28.527 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:28.527 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:28.527 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:28.527 12:17:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:29.096
00:16:29.096 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:29.096 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:29.096 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:29.096 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:29.096 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:29.096 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:29.096 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:29.096 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:29.096 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:29.096 {
00:16:29.096 "cntlid": 129,
00:16:29.096 "qid": 0,
00:16:29.096 "state": "enabled",
00:16:29.096 "thread": "nvmf_tgt_poll_group_000",
00:16:29.096 "listen_address": {
00:16:29.096 "trtype": "TCP",
00:16:29.096 "adrfam": "IPv4",
00:16:29.096 "traddr": "10.0.0.2",
00:16:29.096 "trsvcid": "4420"
00:16:29.096 },
00:16:29.096 "peer_address": {
00:16:29.096 "trtype": "TCP",
00:16:29.096 "adrfam": "IPv4",
00:16:29.096 "traddr": "10.0.0.1",
00:16:29.096 "trsvcid": "33872"
00:16:29.096 },
00:16:29.096 "auth": {
00:16:29.096 "state": "completed",
00:16:29.096 "digest": "sha512",
00:16:29.096 "dhgroup": "ffdhe6144"
00:16:29.096 }
00:16:29.096 }
00:16:29.096 ]'
00:16:29.096 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:29.354 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:29.354 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:29.354 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:29.354 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:29.354 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:29.354 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:29.354 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:29.613 12:17:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=:
00:16:30.549 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:30.549 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:30.549 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:30.549 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.549 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.549 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.549 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:30.549 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:30.549 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:30.807 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1
00:16:30.807 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:30.807 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:30.807 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:30.807 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1
00:16:30.807 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:30.807 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:30.807 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:30.807 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:30.807 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:30.807 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:30.807 12:17:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:16:31.375
00:16:31.375 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:31.375 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:31.375 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:31.633 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:31.633 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:31.633 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:31.633 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:31.633 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:31.633 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:31.633 {
00:16:31.633 "cntlid": 131,
00:16:31.633 "qid": 0,
00:16:31.633 "state": "enabled",
00:16:31.633 "thread": "nvmf_tgt_poll_group_000",
00:16:31.633 "listen_address": {
00:16:31.633 "trtype": "TCP",
00:16:31.633 "adrfam": "IPv4",
00:16:31.633 "traddr": "10.0.0.2",
00:16:31.633 "trsvcid": "4420"
00:16:31.633 },
00:16:31.633 "peer_address": {
00:16:31.633 "trtype": "TCP",
00:16:31.633 "adrfam": "IPv4",
00:16:31.633 "traddr": "10.0.0.1",
00:16:31.633 "trsvcid": "33912"
00:16:31.633 },
00:16:31.633 "auth": {
00:16:31.633 "state": "completed",
00:16:31.633 "digest": "sha512",
00:16:31.633 "dhgroup": "ffdhe6144"
00:16:31.633 }
00:16:31.633 }
00:16:31.633 ]'
00:16:31.633 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:31.633 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:31.633 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:31.892 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:31.892 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:31.892 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:31.892 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:31.892 12:17:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:32.150 12:17:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==:
00:16:33.087 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:33.087 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:33.087 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:33.087 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:33.087 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.087 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:33.087 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:33.087 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:33.087 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:33.345 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2
00:16:33.345 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:33.345 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:33.345 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:33.345 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2
00:16:33.345 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:33.345 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.345 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:33.345 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:33.345 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:33.345 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.345 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:16:33.914
00:16:33.914 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:33.914 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:33.914 12:17:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:34.172 {
00:16:34.172 "cntlid": 133,
00:16:34.172 "qid": 0,
00:16:34.172 "state": "enabled",
00:16:34.172 "thread": "nvmf_tgt_poll_group_000",
00:16:34.172 "listen_address": {
00:16:34.172 "trtype": "TCP",
00:16:34.172 "adrfam": "IPv4",
00:16:34.172 "traddr": "10.0.0.2",
00:16:34.172 "trsvcid": "4420"
00:16:34.172 },
00:16:34.172 "peer_address": {
00:16:34.172 "trtype": "TCP",
00:16:34.172 "adrfam": "IPv4",
00:16:34.172 "traddr": "10.0.0.1",
00:16:34.172 "trsvcid": "33936"
00:16:34.172 },
00:16:34.172 "auth": {
00:16:34.172 "state": "completed",
00:16:34.172 "digest": "sha512",
00:16:34.172 "dhgroup": "ffdhe6144"
00:16:34.172 }
00:16:34.172 }
00:16:34.172 ]'
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:34.172 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:34.430 12:17:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/:
00:16:35.364 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:35.364 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:35.364 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:35.364 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:35.364 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.364 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:35.364 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:35.364 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:35.364 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:16:35.622 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3
00:16:35.622 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:35.622 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:35.622 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144
00:16:35.622 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3
00:16:35.622 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:35.622 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3
00:16:35.622 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:35.622 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:35.622 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:35.622 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:35.622 12:17:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3
00:16:36.189
00:16:36.189 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:36.189 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:36.189 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:36.448 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:36.448 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:36.448 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:36.448 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:36.448 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:36.448 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:36.448 {
00:16:36.448 "cntlid": 135,
00:16:36.448 "qid": 0,
00:16:36.448 "state": "enabled",
00:16:36.448 "thread": "nvmf_tgt_poll_group_000",
00:16:36.448 "listen_address": {
00:16:36.448 "trtype": "TCP",
00:16:36.448 "adrfam": "IPv4",
00:16:36.448 "traddr": "10.0.0.2",
00:16:36.448 "trsvcid": "4420"
00:16:36.448 },
00:16:36.448 "peer_address": {
00:16:36.448 "trtype": "TCP",
00:16:36.448 "adrfam": "IPv4",
00:16:36.448 "traddr": "10.0.0.1",
00:16:36.448 "trsvcid": "33970"
00:16:36.448 },
00:16:36.448 "auth": {
00:16:36.448 "state": "completed",
00:16:36.448 "digest": "sha512",
00:16:36.448 "dhgroup": "ffdhe6144"
00:16:36.448 }
00:16:36.448 }
00:16:36.448 ]'
00:16:36.448 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:36.448 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:36.448 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:36.706 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]]
00:16:36.706 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:36.706 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:36.706 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:36.706 12:17:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:16:36.964 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=:
00:16:37.900 12:17:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:16:37.900 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:16:37.900 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:16:37.900 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:37.900 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:37.900 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:37.900 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}"
00:16:37.900 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}"
00:16:37.900 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:37.900 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:16:38.158 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0
00:16:38.158 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs
00:16:38.158 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512
00:16:38.158 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192
00:16:38.158 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0
00:16:38.158 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:16:38.158 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.158 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:38.158 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:38.158 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:38.158 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:38.158 12:17:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:16:39.119
00:16:39.119 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers
00:16:39.119 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name'
00:16:39.119 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:16:39.386 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:16:39.386 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:16:39.386 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable
00:16:39.386 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:16:39.386 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:16:39.386 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[
00:16:39.386 {
00:16:39.386 "cntlid": 137,
00:16:39.386 "qid": 0,
00:16:39.386 "state": "enabled",
00:16:39.386 "thread": "nvmf_tgt_poll_group_000",
00:16:39.386 "listen_address": {
00:16:39.386 "trtype": "TCP",
00:16:39.386 "adrfam": "IPv4",
00:16:39.386 "traddr": "10.0.0.2",
00:16:39.386 "trsvcid": "4420"
00:16:39.386 },
00:16:39.387 "peer_address": {
00:16:39.387 "trtype": "TCP",
00:16:39.387 "adrfam": "IPv4",
00:16:39.387 "traddr": "10.0.0.1",
00:16:39.387 "trsvcid": "46592"
00:16:39.387 },
00:16:39.387 "auth": {
00:16:39.387 "state": "completed",
00:16:39.387 "digest": "sha512",
00:16:39.387 "dhgroup": "ffdhe8192"
00:16:39.387 }
00:16:39.387 }
00:16:39.387 ]'
00:16:39.387 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest'
00:16:39.387 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:16:39.387 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup'
00:16:39.387 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:16:39.387 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state'
00:16:39.387 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:16:39.387 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0
00:16:39.387 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target --
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.649 12:17:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:16:41.028 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.028 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.028 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:41.028 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.028 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.028 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.028 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.028 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.028 12:17:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:41.028 12:17:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:16:41.028 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:41.028 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:41.029 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:41.029 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:41.029 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:41.029 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.029 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:41.029 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.029 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:41.029 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:41.029 12:17:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:41.965 00:16:41.965 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:41.965 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:41.965 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.223 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.223 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.223 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:42.223 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.223 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:42.223 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.223 { 00:16:42.223 "cntlid": 139, 00:16:42.223 "qid": 0, 00:16:42.223 "state": "enabled", 00:16:42.223 "thread": "nvmf_tgt_poll_group_000", 00:16:42.223 "listen_address": { 00:16:42.223 "trtype": "TCP", 00:16:42.223 "adrfam": "IPv4", 00:16:42.223 "traddr": "10.0.0.2", 00:16:42.223 "trsvcid": "4420" 00:16:42.223 }, 00:16:42.223 "peer_address": { 00:16:42.223 "trtype": "TCP", 00:16:42.223 "adrfam": "IPv4", 00:16:42.223 "traddr": "10.0.0.1", 00:16:42.223 "trsvcid": "46628" 00:16:42.223 }, 00:16:42.223 "auth": { 00:16:42.223 "state": "completed", 00:16:42.223 "digest": "sha512", 00:16:42.223 "dhgroup": "ffdhe8192" 00:16:42.223 } 00:16:42.223 } 00:16:42.223 ]' 00:16:42.223 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.223 
12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:42.223 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.223 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:42.223 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.223 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.223 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.223 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.481 12:17:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:01:YzRmZDdhZDlhYmZiNWYzMTYxNWI3OTk3OWJiZTU4MmZK1Qza: --dhchap-ctrl-secret DHHC-1:02:OGEwY2Q3Yzk0M2E3MzdjMGQwNTMwYmZlY2Q3NGY5YWFhZWFiZDhlZWUzOTMwYWNjs/qMBw==: 00:16:43.411 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.411 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.411 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:43.411 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.411 12:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.411 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.411 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.411 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:43.411 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:43.667 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:16:43.667 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.667 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:43.667 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:43.667 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:43.667 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.667 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.667 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:43.667 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.667 12:17:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:43.667 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:43.667 12:17:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:44.604 00:16:44.604 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:44.604 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:44.604 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.862 12:17:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.862 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.862 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:44.862 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.862 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:44.862 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.862 { 
00:16:44.862 "cntlid": 141, 00:16:44.862 "qid": 0, 00:16:44.862 "state": "enabled", 00:16:44.862 "thread": "nvmf_tgt_poll_group_000", 00:16:44.862 "listen_address": { 00:16:44.862 "trtype": "TCP", 00:16:44.862 "adrfam": "IPv4", 00:16:44.862 "traddr": "10.0.0.2", 00:16:44.862 "trsvcid": "4420" 00:16:44.862 }, 00:16:44.862 "peer_address": { 00:16:44.862 "trtype": "TCP", 00:16:44.862 "adrfam": "IPv4", 00:16:44.862 "traddr": "10.0.0.1", 00:16:44.862 "trsvcid": "46662" 00:16:44.862 }, 00:16:44.862 "auth": { 00:16:44.862 "state": "completed", 00:16:44.862 "digest": "sha512", 00:16:44.862 "dhgroup": "ffdhe8192" 00:16:44.862 } 00:16:44.862 } 00:16:44.862 ]' 00:16:44.862 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.862 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:44.862 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.862 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:44.862 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.120 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.120 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.120 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.378 12:17:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 
5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:02:YTExYWJjNzc2ZjFjOTE1NDdiMjVhYmVmYTYwN2FhOGRjYWUyMzk1Y2NmOWFiYjhkLU4BJw==: --dhchap-ctrl-secret DHHC-1:01:MGY4MDIwZTk2N2Y2NDFlODdjZjRhMGE4MjE4NGEwMTQ/W5n/: 00:16:46.314 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.314 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.314 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:46.314 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.314 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.314 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.314 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.314 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.314 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:16:46.572 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:16:46.573 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.573 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:46.573 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 
-- # dhgroup=ffdhe8192 00:16:46.573 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:46.573 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.573 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:46.573 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:46.573 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.573 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:46.573 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:46.573 12:17:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:47.509 00:16:47.509 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:47.509 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:47.509 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.509 12:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.509 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.510 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:47.510 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.510 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:47.510 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:47.510 { 00:16:47.510 "cntlid": 143, 00:16:47.510 "qid": 0, 00:16:47.510 "state": "enabled", 00:16:47.510 "thread": "nvmf_tgt_poll_group_000", 00:16:47.510 "listen_address": { 00:16:47.510 "trtype": "TCP", 00:16:47.510 "adrfam": "IPv4", 00:16:47.510 "traddr": "10.0.0.2", 00:16:47.510 "trsvcid": "4420" 00:16:47.510 }, 00:16:47.510 "peer_address": { 00:16:47.510 "trtype": "TCP", 00:16:47.510 "adrfam": "IPv4", 00:16:47.510 "traddr": "10.0.0.1", 00:16:47.510 "trsvcid": "44298" 00:16:47.510 }, 00:16:47.510 "auth": { 00:16:47.510 "state": "completed", 00:16:47.510 "digest": "sha512", 00:16:47.510 "dhgroup": "ffdhe8192" 00:16:47.510 } 00:16:47.510 } 00:16:47.510 ]' 00:16:47.510 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:47.767 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:47.767 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:47.767 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:47.767 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:47.767 12:17:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.767 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.767 12:17:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.025 12:17:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 00:16:48.963 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.963 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.963 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:48.963 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:48.963 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.963 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:48.963 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:48.963 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:16:48.963 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:16:48.963 12:17:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.963 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:48.963 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:49.221 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:16:49.221 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.221 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:49.221 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:49.221 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:49.221 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.221 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.221 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:49.221 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.221 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # 
[[ 0 == 0 ]] 00:16:49.221 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.221 12:17:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:50.170 00:16:50.170 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:50.170 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:50.170 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:50.432 { 00:16:50.432 "cntlid": 145, 00:16:50.432 "qid": 0, 00:16:50.432 "state": "enabled", 
00:16:50.432 "thread": "nvmf_tgt_poll_group_000", 00:16:50.432 "listen_address": { 00:16:50.432 "trtype": "TCP", 00:16:50.432 "adrfam": "IPv4", 00:16:50.432 "traddr": "10.0.0.2", 00:16:50.432 "trsvcid": "4420" 00:16:50.432 }, 00:16:50.432 "peer_address": { 00:16:50.432 "trtype": "TCP", 00:16:50.432 "adrfam": "IPv4", 00:16:50.432 "traddr": "10.0.0.1", 00:16:50.432 "trsvcid": "44328" 00:16:50.432 }, 00:16:50.432 "auth": { 00:16:50.432 "state": "completed", 00:16:50.432 "digest": "sha512", 00:16:50.432 "dhgroup": "ffdhe8192" 00:16:50.432 } 00:16:50.432 } 00:16:50.432 ]' 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:50.432 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.690 12:17:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret 
DHHC-1:00:YjEyNTY2ZDk3OWUxMGNkMThhZjI5MDZjZmI1ZTY3NDRjZTI2MDlmY2RlOTczMzZivoPQyQ==: --dhchap-ctrl-secret DHHC-1:03:YzkzZWRkYTlhYzQ1NTc3ZDNlMTkzMzE5NjMwNzRjNTcyNTU0M2RiMmZkNDcxODIwOGM2YmQ0MDYxZGI1MzNmNU8hLAE=: 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.625 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:51.625 
12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:51.625 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:51.626 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:51.626 12:17:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:16:52.560 request: 00:16:52.560 { 00:16:52.560 "name": "nvme0", 00:16:52.560 "trtype": "tcp", 00:16:52.560 "traddr": "10.0.0.2", 00:16:52.560 "adrfam": "ipv4", 00:16:52.560 "trsvcid": "4420", 00:16:52.560 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:52.560 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:52.560 "prchk_reftag": false, 00:16:52.560 "prchk_guard": false, 00:16:52.560 "hdgst": false, 00:16:52.560 "ddgst": false, 00:16:52.560 "dhchap_key": "key2", 
00:16:52.560 "method": "bdev_nvme_attach_controller", 00:16:52.560 "req_id": 1 00:16:52.560 } 00:16:52.560 Got JSON-RPC error response 00:16:52.560 response: 00:16:52.560 { 00:16:52.560 "code": -5, 00:16:52.560 "message": "Input/output error" 00:16:52.560 } 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT 
hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:52.560 12:17:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:16:53.496 request: 00:16:53.496 { 00:16:53.496 "name": "nvme0", 00:16:53.496 
"trtype": "tcp", 00:16:53.496 "traddr": "10.0.0.2", 00:16:53.496 "adrfam": "ipv4", 00:16:53.496 "trsvcid": "4420", 00:16:53.496 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:53.496 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:53.496 "prchk_reftag": false, 00:16:53.496 "prchk_guard": false, 00:16:53.496 "hdgst": false, 00:16:53.496 "ddgst": false, 00:16:53.496 "dhchap_key": "key1", 00:16:53.496 "dhchap_ctrlr_key": "ckey2", 00:16:53.496 "method": "bdev_nvme_attach_controller", 00:16:53.496 "req_id": 1 00:16:53.496 } 00:16:53.496 Got JSON-RPC error response 00:16:53.496 response: 00:16:53.496 { 00:16:53.496 "code": -5, 00:16:53.496 "message": "Input/output error" 00:16:53.496 } 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key1 
00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:53.496 12:17:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:53.497 12:17:46 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:54.434 request: 00:16:54.434 { 00:16:54.434 "name": "nvme0", 00:16:54.434 "trtype": "tcp", 00:16:54.434 "traddr": "10.0.0.2", 00:16:54.434 "adrfam": "ipv4", 00:16:54.434 "trsvcid": "4420", 00:16:54.434 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:54.434 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:54.434 "prchk_reftag": false, 00:16:54.434 "prchk_guard": false, 00:16:54.434 "hdgst": false, 00:16:54.434 "ddgst": false, 00:16:54.434 "dhchap_key": "key1", 00:16:54.434 "dhchap_ctrlr_key": "ckey1", 00:16:54.434 "method": "bdev_nvme_attach_controller", 00:16:54.434 "req_id": 1 00:16:54.434 } 00:16:54.434 Got JSON-RPC error response 00:16:54.434 response: 00:16:54.434 { 00:16:54.434 "code": -5, 00:16:54.434 "message": "Input/output error" 00:16:54.434 } 00:16:54.434 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:54.434 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:54.434 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:54.434 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 
00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 2862883 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2862883 ']' 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2862883 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2862883 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2862883' 00:16:54.435 killing process with pid 2862883 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2862883 00:16:54.435 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2862883 00:16:54.693 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:16:54.693 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:54.693 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@724 -- # 
xtrace_disable 00:16:54.694 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.694 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=2885696 00:16:54.694 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:16:54.694 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 2885696 00:16:54.694 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2885696 ']' 00:16:54.694 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:54.694 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:54.694 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:16:54.694 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:54.694 12:17:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.663 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:55.663 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:55.663 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:55.663 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@730 -- # xtrace_disable 00:16:55.664 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.664 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:55.664 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:55.664 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 2885696 00:16:55.664 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@831 -- # '[' -z 2885696 ']' 00:16:55.664 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:55.664 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # local max_retries=100 00:16:55.664 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:55.664 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:16:55.664 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # xtrace_disable 00:16:55.664 12:17:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.923 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:16:55.923 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # return 0 00:16:55.923 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:16:55.923 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:55.923 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.182 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.182 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:16:56.182 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:56.182 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:16:56.182 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:56.182 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:56.182 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:56.182 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:56.182 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:56.182 
12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.182 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:56.182 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:56.182 12:17:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:57.120 00:16:57.120 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.120 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.120 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.120 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.120 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.120 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:57.120 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.120 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:57.120 12:17:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.120 { 00:16:57.120 "cntlid": 1, 00:16:57.120 "qid": 0, 00:16:57.120 "state": "enabled", 00:16:57.120 "thread": "nvmf_tgt_poll_group_000", 00:16:57.120 "listen_address": { 00:16:57.120 "trtype": "TCP", 00:16:57.120 "adrfam": "IPv4", 00:16:57.120 "traddr": "10.0.0.2", 00:16:57.120 "trsvcid": "4420" 00:16:57.120 }, 00:16:57.120 "peer_address": { 00:16:57.120 "trtype": "TCP", 00:16:57.120 "adrfam": "IPv4", 00:16:57.120 "traddr": "10.0.0.1", 00:16:57.120 "trsvcid": "57768" 00:16:57.120 }, 00:16:57.120 "auth": { 00:16:57.120 "state": "completed", 00:16:57.120 "digest": "sha512", 00:16:57.120 "dhgroup": "ffdhe8192" 00:16:57.120 } 00:16:57.120 } 00:16:57.120 ]' 00:16:57.120 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.378 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:16:57.378 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:57.378 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.378 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:57.378 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:57.378 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.378 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:57.636 12:17:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid 5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-secret DHHC-1:03:OWQzNTIwNmI3MzdiYmJhNTE1OGNjMjU3MGJhZTZmZDkxZDY4OGRhOGEzYjNlOWRkODY2ZDJmYjVmNjE2NDllMaR2Grg=: 00:16:58.570 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.570 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.570 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:16:58.571 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.571 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.571 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.571 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --dhchap-key key3 00:16:58.571 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:16:58.571 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.571 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:16:58.571 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:16:58.571 12:17:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:16:58.829 12:17:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.829 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:58.829 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.829 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:58.829 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.829 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:58.829 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:58.829 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:58.829 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.087 request: 00:16:59.087 { 00:16:59.087 "name": "nvme0", 00:16:59.087 "trtype": "tcp", 00:16:59.087 
"traddr": "10.0.0.2", 00:16:59.087 "adrfam": "ipv4", 00:16:59.087 "trsvcid": "4420", 00:16:59.087 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:59.087 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:59.087 "prchk_reftag": false, 00:16:59.087 "prchk_guard": false, 00:16:59.087 "hdgst": false, 00:16:59.087 "ddgst": false, 00:16:59.087 "dhchap_key": "key3", 00:16:59.087 "method": "bdev_nvme_attach_controller", 00:16:59.087 "req_id": 1 00:16:59.087 } 00:16:59.087 Got JSON-RPC error response 00:16:59.087 response: 00:16:59.087 { 00:16:59.087 "code": -5, 00:16:59.087 "message": "Input/output error" 00:16:59.087 } 00:16:59.345 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:59.345 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:59.345 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:59.345 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:59.345 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:16:59.345 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:16:59.345 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:59.345 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:16:59.345 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.345 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:16:59.345 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.603 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@638 -- # local arg=hostrpc 00:16:59.603 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.603 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:16:59.603 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:16:59.603 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.603 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:59.861 request: 00:16:59.861 { 00:16:59.861 "name": "nvme0", 00:16:59.861 "trtype": "tcp", 00:16:59.861 "traddr": "10.0.0.2", 00:16:59.861 "adrfam": "ipv4", 00:16:59.861 "trsvcid": "4420", 00:16:59.861 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:16:59.861 
"hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:16:59.861 "prchk_reftag": false, 00:16:59.861 "prchk_guard": false, 00:16:59.861 "hdgst": false, 00:16:59.861 "ddgst": false, 00:16:59.861 "dhchap_key": "key3", 00:16:59.861 "method": "bdev_nvme_attach_controller", 00:16:59.861 "req_id": 1 00:16:59.861 } 00:16:59.861 Got JSON-RPC error response 00:16:59.861 response: 00:16:59.861 { 00:16:59.861 "code": -5, 00:16:59.861 "message": "Input/output error" 00:16:59.861 } 00:16:59.861 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:16:59.861 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:16:59.861 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:16:59.861 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:16:59.861 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:59.861 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:16:59.861 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:16:59.861 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.861 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:16:59.861 12:17:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 
00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # local es=0 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@638 -- # local arg=hostrpc 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # type -t hostrpc 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.118 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:00.376 request: 00:17:00.376 { 00:17:00.376 "name": "nvme0", 00:17:00.376 "trtype": "tcp", 00:17:00.376 "traddr": "10.0.0.2", 00:17:00.376 "adrfam": "ipv4", 00:17:00.376 "trsvcid": "4420", 00:17:00.376 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:00.376 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55", 00:17:00.376 "prchk_reftag": false, 00:17:00.376 "prchk_guard": false, 00:17:00.376 "hdgst": false, 00:17:00.376 "ddgst": false, 00:17:00.376 "dhchap_key": "key0", 00:17:00.376 "dhchap_ctrlr_key": "key1", 00:17:00.376 "method": "bdev_nvme_attach_controller", 00:17:00.376 "req_id": 1 00:17:00.376 } 00:17:00.376 Got JSON-RPC error response 00:17:00.376 response: 00:17:00.376 { 00:17:00.376 "code": -5, 00:17:00.376 "message": "Input/output error" 00:17:00.376 } 00:17:00.376 12:17:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@653 -- # es=1 00:17:00.376 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:00.376 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:00.376 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:00.376 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:00.376 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:17:00.634 00:17:00.634 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:17:00.634 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 00:17:00.634 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:00.892 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:00.892 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:00.892 12:17:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:01.150 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:17:01.150 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:17:01.150 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 2863042 00:17:01.150 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2863042 ']' 00:17:01.150 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2863042 00:17:01.150 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:01.150 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:01.150 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2863042 00:17:01.150 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:01.150 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:01.150 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2863042' 00:17:01.150 killing process with pid 2863042 00:17:01.150 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2863042 00:17:01.150 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2863042 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:17:01.717 12:17:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:01.717 rmmod nvme_tcp 00:17:01.717 rmmod nvme_fabrics 00:17:01.717 rmmod nvme_keyring 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 2885696 ']' 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 2885696 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@950 -- # '[' -z 2885696 ']' 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # kill -0 2885696 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # uname 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2885696 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@968 -- # echo 'killing process with pid 2885696' 00:17:01.717 killing process with pid 2885696 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@969 -- # kill 2885696 00:17:01.717 12:17:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@974 -- # wait 2885696 00:17:01.976 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:01.976 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:01.976 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:01.976 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:01.976 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:01.976 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:01.976 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:01.976 12:17:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:03.877 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:03.877 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.tQP /tmp/spdk.key-sha256.eox /tmp/spdk.key-sha384.0LS /tmp/spdk.key-sha512.7Td /tmp/spdk.key-sha512.c4u /tmp/spdk.key-sha384.ebh /tmp/spdk.key-sha256.rYp '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:17:04.136 00:17:04.136 real 3m11.431s 00:17:04.136 user 7m24.699s 00:17:04.136 sys 0m25.133s 00:17:04.136 12:17:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.136 ************************************ 00:17:04.136 END TEST nvmf_auth_target 00:17:04.136 ************************************ 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 4 -le 1 ']' 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:04.136 ************************************ 00:17:04.136 START TEST nvmf_bdevio_no_huge 00:17:04.136 ************************************ 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:17:04.136 * Looking for test storage... 
00:17:04.136 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:17:04.136 
12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:17:04.136 12:17:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:06.036 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:06.037 12:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:06.037 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:06.037 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:06.037 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:06.037 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:06.037 12:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:06.037 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:06.295 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:06.295 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:06.295 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:06.295 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:06.295 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:06.295 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:17:06.295 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:17:06.295 00:17:06.295 --- 10.0.0.2 ping statistics --- 00:17:06.295 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.295 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:17:06.295 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:06.295 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:17:06.295 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.104 ms 00:17:06.295 00:17:06.296 --- 10.0.0.1 ping statistics --- 00:17:06.296 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:06.296 rtt min/avg/max/mdev = 0.104/0.104/0.104/0.000 ms 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=2888481 00:17:06.296 12:17:59 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 2888481 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@831 -- # '[' -z 2888481 ']' 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:06.296 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:06.296 12:17:59 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:06.296 [2024-07-26 12:17:59.419739] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:17:06.296 [2024-07-26 12:17:59.419820] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:17:06.296 [2024-07-26 12:17:59.493021] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:17:06.554 [2024-07-26 12:17:59.614698] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:17:06.554 [2024-07-26 12:17:59.614764] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:06.554 [2024-07-26 12:17:59.614780] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:06.554 [2024-07-26 12:17:59.614794] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:06.554 [2024-07-26 12:17:59.614806] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:17:06.554 [2024-07-26 12:17:59.614900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:17:06.554 [2024-07-26 12:17:59.614957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:17:06.554 [2024-07-26 12:17:59.615007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:17:06.554 [2024-07-26 12:17:59.615010] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:17:07.119 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:07.119 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # return 0 00:17:07.119 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:07.119 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:07.119 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- 
# xtrace_disable 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:07.378 [2024-07-26 12:18:00.393633] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:07.378 Malloc0 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:07.378 [2024-07-26 12:18:00.431847] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:17:07.378 { 00:17:07.378 "params": { 00:17:07.378 "name": "Nvme$subsystem", 00:17:07.378 "trtype": "$TEST_TRANSPORT", 00:17:07.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:17:07.378 "adrfam": "ipv4", 00:17:07.378 "trsvcid": "$NVMF_PORT", 00:17:07.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:17:07.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:17:07.378 "hdgst": ${hdgst:-false}, 00:17:07.378 "ddgst": ${ddgst:-false} 00:17:07.378 }, 00:17:07.378 "method": "bdev_nvme_attach_controller" 00:17:07.378 } 00:17:07.378 EOF 00:17:07.378 )") 00:17:07.378 12:18:00 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@556 -- # jq . 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:17:07.378 12:18:00 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:17:07.378 "params": { 00:17:07.378 "name": "Nvme1", 00:17:07.378 "trtype": "tcp", 00:17:07.378 "traddr": "10.0.0.2", 00:17:07.378 "adrfam": "ipv4", 00:17:07.378 "trsvcid": "4420", 00:17:07.378 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:07.378 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:07.378 "hdgst": false, 00:17:07.378 "ddgst": false 00:17:07.378 }, 00:17:07.378 "method": "bdev_nvme_attach_controller" 00:17:07.378 }' 00:17:07.378 [2024-07-26 12:18:00.477522] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:17:07.378 [2024-07-26 12:18:00.477606] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid2888677 ] 00:17:07.378 [2024-07-26 12:18:00.543614] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:07.636 [2024-07-26 12:18:00.658641] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:07.636 [2024-07-26 12:18:00.658691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:07.636 [2024-07-26 12:18:00.658694] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:17:07.636 I/O targets: 00:17:07.636 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:17:07.636 00:17:07.636 00:17:07.636 CUnit - A unit testing framework for C - Version 2.1-3 00:17:07.636 http://cunit.sourceforge.net/ 00:17:07.636 00:17:07.636 00:17:07.636 Suite: bdevio tests on: Nvme1n1 00:17:07.894 Test: blockdev write read block 
...passed 00:17:07.894 Test: blockdev write zeroes read block ...passed 00:17:07.894 Test: blockdev write zeroes read no split ...passed 00:17:07.894 Test: blockdev write zeroes read split ...passed 00:17:07.894 Test: blockdev write zeroes read split partial ...passed 00:17:07.894 Test: blockdev reset ...[2024-07-26 12:18:01.067528] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:17:07.894 [2024-07-26 12:18:01.067645] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be9fb0 (9): Bad file descriptor 00:17:08.152 [2024-07-26 12:18:01.176439] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:17:08.152 passed 00:17:08.152 Test: blockdev write read 8 blocks ...passed 00:17:08.152 Test: blockdev write read size > 128k ...passed 00:17:08.152 Test: blockdev write read invalid size ...passed 00:17:08.152 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:17:08.152 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:17:08.152 Test: blockdev write read max offset ...passed 00:17:08.152 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:17:08.152 Test: blockdev writev readv 8 blocks ...passed 00:17:08.152 Test: blockdev writev readv 30 x 1block ...passed 00:17:08.152 Test: blockdev writev readv block ...passed 00:17:08.152 Test: blockdev writev readv size > 128k ...passed 00:17:08.152 Test: blockdev writev readv size > 128k in two iovs ...passed 00:17:08.152 Test: blockdev comparev and writev ...[2024-07-26 12:18:01.353088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.152 [2024-07-26 12:18:01.353123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:17:08.152 [2024-07-26 12:18:01.353147] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.152 [2024-07-26 12:18:01.353165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:17:08.152 [2024-07-26 12:18:01.353523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.152 [2024-07-26 12:18:01.353548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:17:08.152 [2024-07-26 12:18:01.353570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.152 [2024-07-26 12:18:01.353585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:17:08.152 [2024-07-26 12:18:01.353938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.152 [2024-07-26 12:18:01.353962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:17:08.152 [2024-07-26 12:18:01.353983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.152 [2024-07-26 12:18:01.353999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:17:08.152 [2024-07-26 12:18:01.354338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.152 [2024-07-26 12:18:01.354362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 
p:0 m:0 dnr:0 00:17:08.152 [2024-07-26 12:18:01.354384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:17:08.152 [2024-07-26 12:18:01.354399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:17:08.152 passed 00:17:08.410 Test: blockdev nvme passthru rw ...passed 00:17:08.410 Test: blockdev nvme passthru vendor specific ...[2024-07-26 12:18:01.437376] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.410 [2024-07-26 12:18:01.437404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:17:08.410 [2024-07-26 12:18:01.437597] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.410 [2024-07-26 12:18:01.437621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:17:08.410 [2024-07-26 12:18:01.437799] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.410 [2024-07-26 12:18:01.437822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:17:08.410 [2024-07-26 12:18:01.438000] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:17:08.410 [2024-07-26 12:18:01.438023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:17:08.410 passed 00:17:08.410 Test: blockdev nvme admin passthru ...passed 00:17:08.410 Test: blockdev copy ...passed 00:17:08.410 00:17:08.410 Run Summary: Type Total Ran Passed Failed Inactive 
00:17:08.410 suites 1 1 n/a 0 0 00:17:08.410 tests 23 23 23 0 0 00:17:08.410 asserts 152 152 152 0 n/a 00:17:08.410 00:17:08.410 Elapsed time = 1.264 seconds 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@561 -- # xtrace_disable 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:17:08.668 rmmod nvme_tcp 00:17:08.668 rmmod nvme_fabrics 00:17:08.668 rmmod nvme_keyring 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@124 -- # set -e 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:17:08.668 
12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 2888481 ']' 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 2888481 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@950 -- # '[' -z 2888481 ']' 00:17:08.668 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # kill -0 2888481 00:17:08.926 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # uname 00:17:08.926 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:08.926 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2888481 00:17:08.926 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@956 -- # process_name=reactor_3 00:17:08.926 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # '[' reactor_3 = sudo ']' 00:17:08.926 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2888481' 00:17:08.926 killing process with pid 2888481 00:17:08.926 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@969 -- # kill 2888481 00:17:08.926 12:18:01 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@974 -- # wait 2888481 00:17:09.184 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:17:09.184 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:17:09.184 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:17:09.184 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk 
== \n\v\m\f\_\t\g\t\_\n\s ]] 00:17:09.185 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:17:09.185 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:09.185 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:09.185 12:18:02 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:17:11.722 00:17:11.722 real 0m7.228s 00:17:11.722 user 0m13.768s 00:17:11.722 sys 0m2.512s 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1126 -- # xtrace_disable 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:17:11.722 ************************************ 00:17:11.722 END TEST nvmf_bdevio_no_huge 00:17:11.722 ************************************ 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:17:11.722 ************************************ 00:17:11.722 START TEST nvmf_tls 00:17:11.722 ************************************ 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:17:11.722 * Looking for test storage... 
00:17:11.722 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:17:11.722 
12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT 
SIGTERM EXIT 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:17:11.722 12:18:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:17:13.125 12:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:17:13.125 12:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:17:13.125 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:17:13.125 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:17:13.125 12:18:06 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:17:13.125 Found net devices under 0000:0a:00.0: cvl_0_0 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:17:13.125 Found net devices under 0000:0a:00.1: cvl_0_1 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:17:13.125 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:17:13.126 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:17:13.126 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:17:13.126 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:17:13.126 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:17:13.126 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:17:13.126 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:17:13.126 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:17:13.126 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:17:13.384 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:17:13.385 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:17:13.385 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:17:13.385 00:17:13.385 --- 10.0.0.2 ping statistics --- 00:17:13.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.385 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:17:13.385 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:17:13.385 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.191 ms 00:17:13.385 00:17:13.385 --- 10.0.0.1 ping statistics --- 00:17:13.385 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:17:13.385 rtt min/avg/max/mdev = 0.191/0.191/0.191/0.000 ms 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@724 -- # xtrace_disable 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2890875 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2890875 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2890875 ']' 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:13.385 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:13.385 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.385 [2024-07-26 12:18:06.569218] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:17:13.385 [2024-07-26 12:18:06.569300] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:13.385 EAL: No free 2048 kB hugepages reported on node 1 00:17:13.385 [2024-07-26 12:18:06.637693] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:13.643 [2024-07-26 12:18:06.748235] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:13.643 [2024-07-26 12:18:06.748295] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:13.643 [2024-07-26 12:18:06.748309] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:13.643 [2024-07-26 12:18:06.748320] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:13.643 [2024-07-26 12:18:06.748330] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:13.643 [2024-07-26 12:18:06.748356] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:13.643 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:13.643 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:13.643 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:13.643 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:13.643 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:13.643 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:13.643 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:17:13.643 12:18:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:17:13.900 true 00:17:13.900 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:13.900 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:17:14.158 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:17:14.158 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:17:14.158 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:14.416 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:14.416 12:18:07 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:17:14.673 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:17:14.673 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:17:14.673 12:18:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:17:14.931 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:14.931 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:17:15.189 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:17:15.189 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:17:15.189 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:15.189 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:17:15.446 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:17:15.446 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:17:15.446 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:17:15.703 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:15.703 12:18:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:17:15.961 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:17:15.961 
12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:17:15.961 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:17:16.218 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:17:16.218 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 
ffeeddccbbaa99887766554433221100 1 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:17:16.477 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:16.735 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:16.735 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:17:16.735 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.xag5lhR2SI 00:17:16.735 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:17:16.735 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.PyR7ruTrXk 00:17:16.735 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:17:16.736 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:17:16.736 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.xag5lhR2SI 00:17:16.736 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.PyR7ruTrXk 00:17:16.736 12:18:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:17:16.994 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:17:17.252 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.xag5lhR2SI 00:17:17.252 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.xag5lhR2SI 00:17:17.252 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:17.511 [2024-07-26 12:18:10.619919] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:17.511 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:17.769 12:18:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:18.027 [2024-07-26 12:18:11.149373] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:18.027 [2024-07-26 12:18:11.149606] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:18.027 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:18.285 malloc0 00:17:18.285 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:18.543 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xag5lhR2SI 00:17:18.801 
[2024-07-26 12:18:11.918644] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:18.801 12:18:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.xag5lhR2SI 00:17:18.801 EAL: No free 2048 kB hugepages reported on node 1 00:17:28.801 Initializing NVMe Controllers 00:17:28.801 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:17:28.801 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:17:28.801 Initialization complete. Launching workers. 00:17:28.801 ======================================================== 00:17:28.801 Latency(us) 00:17:28.801 Device Information : IOPS MiB/s Average min max 00:17:28.801 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7763.54 30.33 8245.81 1225.62 10558.65 00:17:28.801 ======================================================== 00:17:28.801 Total : 7763.54 30.33 8245.81 1225.62 10558.65 00:17:28.801 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.xag5lhR2SI 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xag5lhR2SI' 00:17:28.801 12:18:22 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2893217 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2893217 /var/tmp/bdevperf.sock 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2893217 ']' 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:28.801 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:28.801 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:29.060 [2024-07-26 12:18:22.096092] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:17:29.060 [2024-07-26 12:18:22.096179] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2893217 ] 00:17:29.060 EAL: No free 2048 kB hugepages reported on node 1 00:17:29.060 [2024-07-26 12:18:22.152522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:29.060 [2024-07-26 12:18:22.258717] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:29.318 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:29.318 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:29.318 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xag5lhR2SI 00:17:29.576 [2024-07-26 12:18:22.601802] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:29.576 [2024-07-26 12:18:22.601916] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:29.576 TLSTESTn1 00:17:29.576 12:18:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:29.576 Running I/O for 10 seconds... 
00:17:41.775 00:17:41.775 Latency(us) 00:17:41.775 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.775 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:41.775 Verification LBA range: start 0x0 length 0x2000 00:17:41.775 TLSTESTn1 : 10.04 2884.47 11.27 0.00 0.00 44266.42 10679.94 74177.04 00:17:41.775 =================================================================================================================== 00:17:41.775 Total : 2884.47 11.27 0.00 0.00 44266.42 10679.94 74177.04 00:17:41.775 0 00:17:41.775 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:41.775 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2893217 00:17:41.775 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2893217 ']' 00:17:41.775 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2893217 00:17:41.775 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:41.775 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:41.775 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2893217 00:17:41.775 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:41.775 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:41.775 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2893217' 00:17:41.775 killing process with pid 2893217 00:17:41.776 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2893217 00:17:41.776 Received shutdown signal, test time was about 10.000000 seconds 
00:17:41.776 00:17:41.776 Latency(us) 00:17:41.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.776 =================================================================================================================== 00:17:41.776 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:41.776 [2024-07-26 12:18:32.903897] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:41.776 12:18:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2893217 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PyR7ruTrXk 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PyR7ruTrXk 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.PyR7ruTrXk 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:41.776 12:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.PyR7ruTrXk' 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2894535 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2894535 /var/tmp/bdevperf.sock 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2894535 ']' 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.776 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.776 [2024-07-26 12:18:33.210735] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:17:41.776 [2024-07-26 12:18:33.210805] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894535 ] 00:17:41.776 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.776 [2024-07-26 12:18:33.269468] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.776 [2024-07-26 12:18:33.382290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PyR7ruTrXk 00:17:41.776 [2024-07-26 12:18:33.771967] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:41.776 [2024-07-26 12:18:33.772109] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:41.776 [2024-07-26 12:18:33.777621] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:41.776 [2024-07-26 12:18:33.778093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68cf90 (107): Transport endpoint is not connected 00:17:41.776 [2024-07-26 12:18:33.779070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x68cf90 
(9): Bad file descriptor 00:17:41.776 [2024-07-26 12:18:33.780068] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:41.776 [2024-07-26 12:18:33.780105] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:41.776 [2024-07-26 12:18:33.780123] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:41.776 request: 00:17:41.776 { 00:17:41.776 "name": "TLSTEST", 00:17:41.776 "trtype": "tcp", 00:17:41.776 "traddr": "10.0.0.2", 00:17:41.776 "adrfam": "ipv4", 00:17:41.776 "trsvcid": "4420", 00:17:41.776 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.776 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:41.776 "prchk_reftag": false, 00:17:41.776 "prchk_guard": false, 00:17:41.776 "hdgst": false, 00:17:41.776 "ddgst": false, 00:17:41.776 "psk": "/tmp/tmp.PyR7ruTrXk", 00:17:41.776 "method": "bdev_nvme_attach_controller", 00:17:41.776 "req_id": 1 00:17:41.776 } 00:17:41.776 Got JSON-RPC error response 00:17:41.776 response: 00:17:41.776 { 00:17:41.776 "code": -5, 00:17:41.776 "message": "Input/output error" 00:17:41.776 } 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2894535 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2894535 ']' 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2894535 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2894535 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:41.776 12:18:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2894535' 00:17:41.776 killing process with pid 2894535 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2894535 00:17:41.776 Received shutdown signal, test time was about 10.000000 seconds 00:17:41.776 00:17:41.776 Latency(us) 00:17:41.776 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.776 =================================================================================================================== 00:17:41.776 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:41.776 [2024-07-26 12:18:33.832565] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:41.776 12:18:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2894535 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xag5lhR2SI 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # 
valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xag5lhR2SI 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.xag5lhR2SI 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xag5lhR2SI' 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:41.776 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2894661 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2894661 /var/tmp/bdevperf.sock 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@831 -- # '[' -z 2894661 ']' 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.777 [2024-07-26 12:18:34.133781] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:17:41.777 [2024-07-26 12:18:34.133868] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894661 ] 00:17:41.777 EAL: No free 2048 kB hugepages reported on node 1 00:17:41.777 [2024-07-26 12:18:34.191902] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:41.777 [2024-07-26 12:18:34.299924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 
-q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.xag5lhR2SI 00:17:41.777 [2024-07-26 12:18:34.634741] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:41.777 [2024-07-26 12:18:34.634859] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:41.777 [2024-07-26 12:18:34.640264] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:41.777 [2024-07-26 12:18:34.640299] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:17:41.777 [2024-07-26 12:18:34.640361] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:41.777 [2024-07-26 12:18:34.640845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2369f90 (107): Transport endpoint is not connected 00:17:41.777 [2024-07-26 12:18:34.641834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2369f90 (9): Bad file descriptor 00:17:41.777 [2024-07-26 12:18:34.642832] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:41.777 [2024-07-26 12:18:34.642854] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:41.777 [2024-07-26 12:18:34.642871] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:17:41.777 request: 00:17:41.777 { 00:17:41.777 "name": "TLSTEST", 00:17:41.777 "trtype": "tcp", 00:17:41.777 "traddr": "10.0.0.2", 00:17:41.777 "adrfam": "ipv4", 00:17:41.777 "trsvcid": "4420", 00:17:41.777 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:41.777 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:17:41.777 "prchk_reftag": false, 00:17:41.777 "prchk_guard": false, 00:17:41.777 "hdgst": false, 00:17:41.777 "ddgst": false, 00:17:41.777 "psk": "/tmp/tmp.xag5lhR2SI", 00:17:41.777 "method": "bdev_nvme_attach_controller", 00:17:41.777 "req_id": 1 00:17:41.777 } 00:17:41.777 Got JSON-RPC error response 00:17:41.777 response: 00:17:41.777 { 00:17:41.777 "code": -5, 00:17:41.777 "message": "Input/output error" 00:17:41.777 } 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2894661 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2894661 ']' 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2894661 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2894661 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2894661' 00:17:41.777 killing process with pid 2894661 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2894661 00:17:41.777 Received shutdown signal, test time was 
about 10.000000 seconds 00:17:41.777 00:17:41.777 Latency(us) 00:17:41.777 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:41.777 =================================================================================================================== 00:17:41.777 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:41.777 [2024-07-26 12:18:34.695571] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2894661 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xag5lhR2SI 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xag5lhR2SI 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t 
run_bdevperf 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.xag5lhR2SI 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.xag5lhR2SI' 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2894691 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2894691 /var/tmp/bdevperf.sock 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2894691 ']' 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:41.777 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:41.777 12:18:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:41.777 [2024-07-26 12:18:35.005047] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:17:41.777 [2024-07-26 12:18:35.005145] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894691 ] 00:17:42.065 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.065 [2024-07-26 12:18:35.064676] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.065 [2024-07-26 12:18:35.170289] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:42.065 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:42.065 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:42.065 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.xag5lhR2SI 00:17:42.323 [2024-07-26 12:18:35.535791] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:42.323 [2024-07-26 12:18:35.535901] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:42.323 [2024-07-26 12:18:35.542111] tcp.c: 
894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:42.323 [2024-07-26 12:18:35.542143] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:17:42.323 [2024-07-26 12:18:35.542181] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:42.323 [2024-07-26 12:18:35.542746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240df90 (107): Transport endpoint is not connected 00:17:42.323 [2024-07-26 12:18:35.543735] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x240df90 (9): Bad file descriptor 00:17:42.323 [2024-07-26 12:18:35.544735] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:17:42.323 [2024-07-26 12:18:35.544755] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:42.323 [2024-07-26 12:18:35.544773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:17:42.323 request: 00:17:42.323 { 00:17:42.323 "name": "TLSTEST", 00:17:42.323 "trtype": "tcp", 00:17:42.323 "traddr": "10.0.0.2", 00:17:42.323 "adrfam": "ipv4", 00:17:42.323 "trsvcid": "4420", 00:17:42.323 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:17:42.323 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:42.323 "prchk_reftag": false, 00:17:42.323 "prchk_guard": false, 00:17:42.323 "hdgst": false, 00:17:42.323 "ddgst": false, 00:17:42.323 "psk": "/tmp/tmp.xag5lhR2SI", 00:17:42.323 "method": "bdev_nvme_attach_controller", 00:17:42.323 "req_id": 1 00:17:42.323 } 00:17:42.323 Got JSON-RPC error response 00:17:42.323 response: 00:17:42.323 { 00:17:42.323 "code": -5, 00:17:42.323 "message": "Input/output error" 00:17:42.323 } 00:17:42.323 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2894691 00:17:42.323 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2894691 ']' 00:17:42.323 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2894691 00:17:42.323 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:42.323 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:42.323 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2894691 00:17:42.581 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:42.581 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:42.581 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2894691' 00:17:42.581 killing process with pid 2894691 00:17:42.581 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2894691 00:17:42.581 Received shutdown signal, test time was 
about 10.000000 seconds 00:17:42.581 00:17:42.581 Latency(us) 00:17:42.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:42.581 =================================================================================================================== 00:17:42.581 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:42.581 [2024-07-26 12:18:35.596379] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:42.581 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2894691 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:42.839 12:18:35 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2894833 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2894833 /var/tmp/bdevperf.sock 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2894833 ']' 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:17:42.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:42.839 12:18:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:42.839 [2024-07-26 12:18:35.904347] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:17:42.839 [2024-07-26 12:18:35.904436] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2894833 ] 00:17:42.839 EAL: No free 2048 kB hugepages reported on node 1 00:17:42.839 [2024-07-26 12:18:35.962170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:42.839 [2024-07-26 12:18:36.069235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:43.097 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:43.097 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:43.097 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:17:43.355 [2024-07-26 12:18:36.465220] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:17:43.355 [2024-07-26 12:18:36.467098] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1140770 (9): Bad file descriptor 00:17:43.355 [2024-07-26 12:18:36.468094] 
nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:17:43.355 [2024-07-26 12:18:36.468116] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:17:43.355 [2024-07-26 12:18:36.468133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:17:43.355 request: 00:17:43.355 { 00:17:43.355 "name": "TLSTEST", 00:17:43.355 "trtype": "tcp", 00:17:43.355 "traddr": "10.0.0.2", 00:17:43.355 "adrfam": "ipv4", 00:17:43.355 "trsvcid": "4420", 00:17:43.355 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:43.355 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:43.355 "prchk_reftag": false, 00:17:43.355 "prchk_guard": false, 00:17:43.355 "hdgst": false, 00:17:43.355 "ddgst": false, 00:17:43.355 "method": "bdev_nvme_attach_controller", 00:17:43.355 "req_id": 1 00:17:43.355 } 00:17:43.355 Got JSON-RPC error response 00:17:43.355 response: 00:17:43.355 { 00:17:43.355 "code": -5, 00:17:43.355 "message": "Input/output error" 00:17:43.355 } 00:17:43.355 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2894833 00:17:43.355 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2894833 ']' 00:17:43.355 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2894833 00:17:43.355 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:43.355 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.355 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2894833 00:17:43.355 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:43.355 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:43.355 12:18:36 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2894833' 00:17:43.355 killing process with pid 2894833 00:17:43.355 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2894833 00:17:43.355 Received shutdown signal, test time was about 10.000000 seconds 00:17:43.355 00:17:43.355 Latency(us) 00:17:43.355 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:43.355 =================================================================================================================== 00:17:43.355 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:43.355 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2894833 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 2890875 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2890875 ']' 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2890875 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2890875 00:17:43.613 
12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2890875' 00:17:43.613 killing process with pid 2890875 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2890875 00:17:43.613 [2024-07-26 12:18:36.800739] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:43.613 12:18:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2890875 00:17:43.873 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:17:43.873 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:17:43.873 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:17:43.873 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:17:43.873 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:17:43.873 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:17:43.873 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@160 -- # key_long_path=/tmp/tmp.gZbaxJR2Za 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.gZbaxJR2Za 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2894981 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2894981 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2894981 ']' 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
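Annotation: the `format_interchange_psk`/`format_key` helper traced above turns the raw hex-string key and digest id `2` into the `NVMeTLSkey-1:02:...:` value logged as `key_long`. A minimal sketch of that construction follows; the CRC32 handling (appended little-endian over the key bytes, per the NVMe TLS PSK interchange format) is a reconstruction and may differ in detail from the actual `nvmf/common.sh` python one-liner:

```python
import base64
import struct
import zlib

def format_interchange_psk(key_string: str, hash_id: int) -> str:
    # Sketch of the traced helper: the key material is the ASCII string
    # itself (as shown in the trace), with a little-endian CRC32 of the
    # key appended before base64 encoding. Layout (assumed):
    #   NVMeTLSkey-1:<hash_id>:<base64(key bytes + CRC32)>:
    key = key_string.encode("ascii")
    crc = struct.pack("<I", zlib.crc32(key) & 0xFFFFFFFF)
    return "NVMeTLSkey-1:%02d:%s:" % (hash_id,
                                      base64.b64encode(key + crc).decode("ascii"))

# Same inputs as target/tls.sh@159 above.
key_long = format_interchange_psk(
    "00112233445566778899aabbccddeeff0011223344556677", 2)
```

The first 64 base64 characters encode the 48-byte key string exactly (48 is divisible by 3), which is why the logged value begins `NVMeTLSkey-1:02:MDAxMTIy...` — base64 of the literal characters `001122...`.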
00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:44.131 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.131 [2024-07-26 12:18:37.216971] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:17:44.132 [2024-07-26 12:18:37.217092] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:44.132 EAL: No free 2048 kB hugepages reported on node 1 00:17:44.132 [2024-07-26 12:18:37.286342] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:44.390 [2024-07-26 12:18:37.406777] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:44.390 [2024-07-26 12:18:37.406833] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:44.390 [2024-07-26 12:18:37.406850] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:44.390 [2024-07-26 12:18:37.406863] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:44.390 [2024-07-26 12:18:37.406875] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:44.390 [2024-07-26 12:18:37.406903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:44.390 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:44.390 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:44.390 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:44.390 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:44.390 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:44.390 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:44.390 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.gZbaxJR2Za 00:17:44.390 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gZbaxJR2Za 00:17:44.390 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:44.648 [2024-07-26 12:18:37.775212] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:44.648 12:18:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:44.906 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:45.163 [2024-07-26 12:18:38.272584] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:45.163 [2024-07-26 12:18:38.272867] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:17:45.163 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:17:45.421 malloc0 00:17:45.421 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:17:45.679 12:18:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gZbaxJR2Za 00:17:45.937 [2024-07-26 12:18:39.066589] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gZbaxJR2Za 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gZbaxJR2Za' 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2895267 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2895267 /var/tmp/bdevperf.sock 00:17:45.937 
12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2895267 ']' 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:45.937 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:45.937 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:45.937 [2024-07-26 12:18:39.127639] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
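Annotation: the `setup_nvmf_tgt` sequence traced above (transport, subsystem, TLS listener via `-k`, malloc bdev, namespace, host with `--psk`) maps to a series of JSON-RPC requests issued through `scripts/rpc.py`. The sketch below only builds the request payloads, mirroring the request bodies visible in this log; the parameter names (`serial_number`, `secure_channel`, `namespace`, etc.) are inferred from rpc.py conventions and are assumptions, not verified against this SPDK revision:

```python
import json

def rpc(method, **params):
    # Build one JSON-RPC 2.0 request of the shape seen in the log dumps.
    return {"jsonrpc": "2.0", "id": 1, "method": method, "params": params}

# Mirrors target/tls.sh@51..58 above (parameter names are assumptions).
setup = [
    rpc("nvmf_create_transport", trtype="tcp"),
    rpc("nvmf_create_subsystem", nqn="nqn.2016-06.io.spdk:cnode1",
        serial_number="SPDK00000000000001", max_namespaces=10),
    # The -k flag on rpc.py enables the (experimental) TLS secure channel.
    rpc("nvmf_subsystem_add_listener", nqn="nqn.2016-06.io.spdk:cnode1",
        secure_channel=True,
        listen_address={"trtype": "tcp", "traddr": "10.0.0.2",
                        "adrfam": "ipv4", "trsvcid": "4420"}),
    # bdev_malloc_create 32 4096 -> 32 MiB at 4096-byte blocks = 8192 blocks.
    rpc("bdev_malloc_create", num_blocks=8192, block_size=4096, name="malloc0"),
    rpc("nvmf_subsystem_add_ns", nqn="nqn.2016-06.io.spdk:cnode1",
        namespace={"bdev_name": "malloc0", "nsid": 1}),
    rpc("nvmf_subsystem_add_host", nqn="nqn.2016-06.io.spdk:cnode1",
        host="nqn.2016-06.io.spdk:host1", psk="/tmp/tmp.gZbaxJR2Za"),
]
wire = [json.dumps(r) for r in setup]
```

In the real test these requests go over the UNIX domain socket (`/var/tmp/spdk.sock`) rather than being built by hand.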
00:17:45.937 [2024-07-26 12:18:39.127719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2895267 ] 00:17:45.937 EAL: No free 2048 kB hugepages reported on node 1 00:17:46.195 [2024-07-26 12:18:39.192432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:46.195 [2024-07-26 12:18:39.301692] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:46.195 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:46.195 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:46.195 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gZbaxJR2Za 00:17:46.453 [2024-07-26 12:18:39.697251] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:46.453 [2024-07-26 12:18:39.697403] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:17:46.711 TLSTESTn1 00:17:46.711 12:18:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:17:46.711 Running I/O for 10 seconds... 
00:17:58.909 00:17:58.909 Latency(us) 00:17:58.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.909 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:17:58.909 Verification LBA range: start 0x0 length 0x2000 00:17:58.909 TLSTESTn1 : 10.03 3062.67 11.96 0.00 0.00 41707.25 6359.42 63302.92 00:17:58.909 =================================================================================================================== 00:17:58.909 Total : 3062.67 11.96 0.00 0.00 41707.25 6359.42 63302.92 00:17:58.909 0 00:17:58.909 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:17:58.909 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 2895267 00:17:58.909 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2895267 ']' 00:17:58.909 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2895267 00:17:58.909 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:58.909 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.909 12:18:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2895267 00:17:58.909 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:58.909 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:58.909 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2895267' 00:17:58.909 killing process with pid 2895267 00:17:58.909 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2895267 00:17:58.909 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.909 
00:17:58.909 Latency(us) 00:17:58.909 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:17:58.909 =================================================================================================================== 00:17:58.909 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:17:58.909 [2024-07-26 12:18:50.016385] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:17:58.909 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2895267 00:17:58.909 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.gZbaxJR2Za 00:17:58.909 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gZbaxJR2Za 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gZbaxJR2Za 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=run_bdevperf 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t run_bdevperf 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.gZbaxJR2Za 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:17:58.910 12:18:50 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.gZbaxJR2Za' 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=2896579 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 2896579 /var/tmp/bdevperf.sock 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2896579 ']' 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:17:58.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.910 [2024-07-26 12:18:50.331046] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:17:58.910 [2024-07-26 12:18:50.331142] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2896579 ] 00:17:58.910 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.910 [2024-07-26 12:18:50.388593] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.910 [2024-07-26 12:18:50.491098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gZbaxJR2Za 00:17:58.910 [2024-07-26 12:18:50.825975] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:17:58.910 [2024-07-26 12:18:50.826080] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:17:58.910 [2024-07-26 12:18:50.826097] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.gZbaxJR2Za 00:17:58.910 request: 00:17:58.910 { 00:17:58.910 "name": "TLSTEST", 00:17:58.910 "trtype": "tcp", 00:17:58.910 "traddr": "10.0.0.2", 00:17:58.910 
"adrfam": "ipv4", 00:17:58.910 "trsvcid": "4420", 00:17:58.910 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:17:58.910 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:17:58.910 "prchk_reftag": false, 00:17:58.910 "prchk_guard": false, 00:17:58.910 "hdgst": false, 00:17:58.910 "ddgst": false, 00:17:58.910 "psk": "/tmp/tmp.gZbaxJR2Za", 00:17:58.910 "method": "bdev_nvme_attach_controller", 00:17:58.910 "req_id": 1 00:17:58.910 } 00:17:58.910 Got JSON-RPC error response 00:17:58.910 response: 00:17:58.910 { 00:17:58.910 "code": -1, 00:17:58.910 "message": "Operation not permitted" 00:17:58.910 } 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 2896579 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2896579 ']' 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2896579 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2896579 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2896579' 00:17:58.910 killing process with pid 2896579 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2896579 00:17:58.910 Received shutdown signal, test time was about 10.000000 seconds 00:17:58.910 00:17:58.910 Latency(us) 00:17:58.910 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:17:58.910 =================================================================================================================== 00:17:58.910 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:17:58.910 12:18:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2896579 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 2894981 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2894981 ']' 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2894981 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2894981 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2894981' 00:17:58.910 killing process with pid 2894981 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 2894981 00:17:58.910 [2024-07-26 12:18:51.164228] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2894981 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2896726 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2896726 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2896726 ']' 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:58.910 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
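Annotation: the failed case above is driven by `chmod 0666` on the PSK file — the attach then fails with `Incorrect permissions for PSK file` / `Operation not permitted`, whereas the earlier `chmod 0600` run succeeded. A hypothetical illustration of that kind of check (reject a key file accessible to group or others) is sketched below; it is a stand-in for the behavior observed in the log, not SPDK's actual `bdev_nvme_load_psk` code:

```python
import os
import stat
import tempfile

def psk_file_permissions_ok(path: str) -> bool:
    # Accept the PSK file only if neither group nor others have any access,
    # matching the 0600-passes / 0666-fails behavior seen in the trace.
    mode = os.stat(path).st_mode
    return (mode & (stat.S_IRWXG | stat.S_IRWXO)) == 0

fd, path = tempfile.mkstemp()
os.close(fd)
os.chmod(path, 0o600)
ok_0600 = psk_file_permissions_ok(path)  # owner-only: accepted
os.chmod(path, 0o666)
ok_0666 = psk_file_permissions_ok(path)  # world-readable: rejected
os.remove(path)
```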
00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:17:58.910 12:18:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:58.910 [2024-07-26 12:18:51.493835] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:17:58.910 [2024-07-26 12:18:51.493923] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:17:58.910 EAL: No free 2048 kB hugepages reported on node 1 00:17:58.910 [2024-07-26 12:18:51.562556] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:58.910 [2024-07-26 12:18:51.678608] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:17:58.911 [2024-07-26 12:18:51.678672] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:17:58.911 [2024-07-26 12:18:51.678699] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:17:58.911 [2024-07-26 12:18:51.678712] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:17:58.911 [2024-07-26 12:18:51.678724] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:17:58.911 [2024-07-26 12:18:51.678762] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.gZbaxJR2Za 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # local es=0 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.gZbaxJR2Za 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@638 -- # local arg=setup_nvmf_tgt 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # type -t setup_nvmf_tgt 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # setup_nvmf_tgt /tmp/tmp.gZbaxJR2Za 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gZbaxJR2Za 00:17:59.478 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:17:59.736 [2024-07-26 12:18:52.746367] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:17:59.736 12:18:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:17:59.994 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:17:59.994 [2024-07-26 12:18:53.223594] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:17:59.994 [2024-07-26 12:18:53.223848] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:17:59.994 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:00.251 malloc0 00:18:00.251 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:00.508 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gZbaxJR2Za 00:18:00.765 [2024-07-26 12:18:53.965353] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:18:00.765 [2024-07-26 12:18:53.965397] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:18:00.765 [2024-07-26 12:18:53.965443] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:18:00.765 request: 00:18:00.765 { 
00:18:00.765 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:00.765 "host": "nqn.2016-06.io.spdk:host1", 00:18:00.765 "psk": "/tmp/tmp.gZbaxJR2Za", 00:18:00.765 "method": "nvmf_subsystem_add_host", 00:18:00.765 "req_id": 1 00:18:00.765 } 00:18:00.765 Got JSON-RPC error response 00:18:00.765 response: 00:18:00.765 { 00:18:00.765 "code": -32603, 00:18:00.765 "message": "Internal error" 00:18:00.765 } 00:18:00.765 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@653 -- # es=1 00:18:00.765 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:00.765 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:00.765 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:00.765 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 2896726 00:18:00.765 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2896726 ']' 00:18:00.765 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2896726 00:18:00.765 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:00.765 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:00.765 12:18:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2896726 00:18:00.765 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:00.765 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:00.765 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2896726' 00:18:00.765 killing process with pid 2896726 00:18:00.765 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 2896726 00:18:00.765 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2896726 00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.gZbaxJR2Za 00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2897037 00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2897037 00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2897037 ']' 00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:01.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:01.329 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.329 [2024-07-26 12:18:54.360928] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:18:01.329 [2024-07-26 12:18:54.361019] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:01.329 EAL: No free 2048 kB hugepages reported on node 1 00:18:01.329 [2024-07-26 12:18:54.422480] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:01.329 [2024-07-26 12:18:54.528397] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:01.329 [2024-07-26 12:18:54.528460] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:01.329 [2024-07-26 12:18:54.528485] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:01.329 [2024-07-26 12:18:54.528497] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:01.329 [2024-07-26 12:18:54.528507] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:01.329 [2024-07-26 12:18:54.528547] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:01.588 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:01.588 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:01.588 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:01.588 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:01.588 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:01.588 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:01.588 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.gZbaxJR2Za 00:18:01.588 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gZbaxJR2Za 00:18:01.588 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:01.846 [2024-07-26 12:18:54.881897] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:01.846 12:18:54 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:02.103 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:02.361 [2024-07-26 12:18:55.367148] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:02.361 [2024-07-26 12:18:55.367399] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:02.361 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:02.619 malloc0 00:18:02.619 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:02.878 12:18:55 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gZbaxJR2Za 00:18:02.878 [2024-07-26 12:18:56.113109] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:03.136 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=2897320 00:18:03.136 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:03.136 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:03.136 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 2897320 /var/tmp/bdevperf.sock 00:18:03.136 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2897320 ']' 00:18:03.136 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:03.136 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:03.136 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:18:03.136 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:03.136 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:03.136 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:03.136 [2024-07-26 12:18:56.170237] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:18:03.136 [2024-07-26 12:18:56.170325] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897320 ] 00:18:03.136 EAL: No free 2048 kB hugepages reported on node 1 00:18:03.136 [2024-07-26 12:18:56.229446] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:03.136 [2024-07-26 12:18:56.338470] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:03.394 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:03.394 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:03.394 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gZbaxJR2Za 00:18:03.651 [2024-07-26 12:18:56.662161] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:03.651 [2024-07-26 12:18:56.662287] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:03.651 TLSTESTn1 00:18:03.651 12:18:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:18:03.909 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:18:03.909 "subsystems": [ 00:18:03.909 { 00:18:03.909 "subsystem": "keyring", 00:18:03.909 "config": [] 00:18:03.909 }, 00:18:03.909 { 00:18:03.909 "subsystem": "iobuf", 00:18:03.909 "config": [ 00:18:03.909 { 00:18:03.909 "method": "iobuf_set_options", 00:18:03.909 "params": { 00:18:03.909 "small_pool_count": 8192, 00:18:03.909 "large_pool_count": 1024, 00:18:03.909 "small_bufsize": 8192, 00:18:03.909 "large_bufsize": 135168 00:18:03.909 } 00:18:03.909 } 00:18:03.909 ] 00:18:03.909 }, 00:18:03.909 { 00:18:03.909 "subsystem": "sock", 00:18:03.909 "config": [ 00:18:03.909 { 00:18:03.909 "method": "sock_set_default_impl", 00:18:03.909 "params": { 00:18:03.909 "impl_name": "posix" 00:18:03.909 } 00:18:03.909 }, 00:18:03.909 { 00:18:03.909 "method": "sock_impl_set_options", 00:18:03.909 "params": { 00:18:03.909 "impl_name": "ssl", 00:18:03.909 "recv_buf_size": 4096, 00:18:03.909 "send_buf_size": 4096, 00:18:03.909 "enable_recv_pipe": true, 00:18:03.909 "enable_quickack": false, 00:18:03.909 "enable_placement_id": 0, 00:18:03.909 "enable_zerocopy_send_server": true, 00:18:03.909 "enable_zerocopy_send_client": false, 00:18:03.909 "zerocopy_threshold": 0, 00:18:03.909 "tls_version": 0, 00:18:03.909 "enable_ktls": false 00:18:03.909 } 00:18:03.909 }, 00:18:03.909 { 00:18:03.909 "method": "sock_impl_set_options", 00:18:03.909 "params": { 00:18:03.909 "impl_name": "posix", 00:18:03.909 "recv_buf_size": 2097152, 00:18:03.909 "send_buf_size": 2097152, 00:18:03.909 "enable_recv_pipe": true, 00:18:03.909 "enable_quickack": false, 00:18:03.909 "enable_placement_id": 0, 00:18:03.909 "enable_zerocopy_send_server": true, 00:18:03.909 "enable_zerocopy_send_client": false, 00:18:03.910 "zerocopy_threshold": 0, 00:18:03.910 "tls_version": 0, 00:18:03.910 "enable_ktls": false 00:18:03.910 } 
00:18:03.910 } 00:18:03.910 ] 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "subsystem": "vmd", 00:18:03.910 "config": [] 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "subsystem": "accel", 00:18:03.910 "config": [ 00:18:03.910 { 00:18:03.910 "method": "accel_set_options", 00:18:03.910 "params": { 00:18:03.910 "small_cache_size": 128, 00:18:03.910 "large_cache_size": 16, 00:18:03.910 "task_count": 2048, 00:18:03.910 "sequence_count": 2048, 00:18:03.910 "buf_count": 2048 00:18:03.910 } 00:18:03.910 } 00:18:03.910 ] 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "subsystem": "bdev", 00:18:03.910 "config": [ 00:18:03.910 { 00:18:03.910 "method": "bdev_set_options", 00:18:03.910 "params": { 00:18:03.910 "bdev_io_pool_size": 65535, 00:18:03.910 "bdev_io_cache_size": 256, 00:18:03.910 "bdev_auto_examine": true, 00:18:03.910 "iobuf_small_cache_size": 128, 00:18:03.910 "iobuf_large_cache_size": 16 00:18:03.910 } 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "method": "bdev_raid_set_options", 00:18:03.910 "params": { 00:18:03.910 "process_window_size_kb": 1024, 00:18:03.910 "process_max_bandwidth_mb_sec": 0 00:18:03.910 } 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "method": "bdev_iscsi_set_options", 00:18:03.910 "params": { 00:18:03.910 "timeout_sec": 30 00:18:03.910 } 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "method": "bdev_nvme_set_options", 00:18:03.910 "params": { 00:18:03.910 "action_on_timeout": "none", 00:18:03.910 "timeout_us": 0, 00:18:03.910 "timeout_admin_us": 0, 00:18:03.910 "keep_alive_timeout_ms": 10000, 00:18:03.910 "arbitration_burst": 0, 00:18:03.910 "low_priority_weight": 0, 00:18:03.910 "medium_priority_weight": 0, 00:18:03.910 "high_priority_weight": 0, 00:18:03.910 "nvme_adminq_poll_period_us": 10000, 00:18:03.910 "nvme_ioq_poll_period_us": 0, 00:18:03.910 "io_queue_requests": 0, 00:18:03.910 "delay_cmd_submit": true, 00:18:03.910 "transport_retry_count": 4, 00:18:03.910 "bdev_retry_count": 3, 00:18:03.910 "transport_ack_timeout": 0, 00:18:03.910 
"ctrlr_loss_timeout_sec": 0, 00:18:03.910 "reconnect_delay_sec": 0, 00:18:03.910 "fast_io_fail_timeout_sec": 0, 00:18:03.910 "disable_auto_failback": false, 00:18:03.910 "generate_uuids": false, 00:18:03.910 "transport_tos": 0, 00:18:03.910 "nvme_error_stat": false, 00:18:03.910 "rdma_srq_size": 0, 00:18:03.910 "io_path_stat": false, 00:18:03.910 "allow_accel_sequence": false, 00:18:03.910 "rdma_max_cq_size": 0, 00:18:03.910 "rdma_cm_event_timeout_ms": 0, 00:18:03.910 "dhchap_digests": [ 00:18:03.910 "sha256", 00:18:03.910 "sha384", 00:18:03.910 "sha512" 00:18:03.910 ], 00:18:03.910 "dhchap_dhgroups": [ 00:18:03.910 "null", 00:18:03.910 "ffdhe2048", 00:18:03.910 "ffdhe3072", 00:18:03.910 "ffdhe4096", 00:18:03.910 "ffdhe6144", 00:18:03.910 "ffdhe8192" 00:18:03.910 ] 00:18:03.910 } 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "method": "bdev_nvme_set_hotplug", 00:18:03.910 "params": { 00:18:03.910 "period_us": 100000, 00:18:03.910 "enable": false 00:18:03.910 } 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "method": "bdev_malloc_create", 00:18:03.910 "params": { 00:18:03.910 "name": "malloc0", 00:18:03.910 "num_blocks": 8192, 00:18:03.910 "block_size": 4096, 00:18:03.910 "physical_block_size": 4096, 00:18:03.910 "uuid": "e4da1035-666f-4e21-b6de-0ae87a7e0767", 00:18:03.910 "optimal_io_boundary": 0, 00:18:03.910 "md_size": 0, 00:18:03.910 "dif_type": 0, 00:18:03.910 "dif_is_head_of_md": false, 00:18:03.910 "dif_pi_format": 0 00:18:03.910 } 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "method": "bdev_wait_for_examine" 00:18:03.910 } 00:18:03.910 ] 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "subsystem": "nbd", 00:18:03.910 "config": [] 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "subsystem": "scheduler", 00:18:03.910 "config": [ 00:18:03.910 { 00:18:03.910 "method": "framework_set_scheduler", 00:18:03.910 "params": { 00:18:03.910 "name": "static" 00:18:03.910 } 00:18:03.910 } 00:18:03.910 ] 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "subsystem": "nvmf", 00:18:03.910 
"config": [ 00:18:03.910 { 00:18:03.910 "method": "nvmf_set_config", 00:18:03.910 "params": { 00:18:03.910 "discovery_filter": "match_any", 00:18:03.910 "admin_cmd_passthru": { 00:18:03.910 "identify_ctrlr": false 00:18:03.910 } 00:18:03.910 } 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "method": "nvmf_set_max_subsystems", 00:18:03.910 "params": { 00:18:03.910 "max_subsystems": 1024 00:18:03.910 } 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "method": "nvmf_set_crdt", 00:18:03.910 "params": { 00:18:03.910 "crdt1": 0, 00:18:03.910 "crdt2": 0, 00:18:03.910 "crdt3": 0 00:18:03.910 } 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "method": "nvmf_create_transport", 00:18:03.910 "params": { 00:18:03.910 "trtype": "TCP", 00:18:03.910 "max_queue_depth": 128, 00:18:03.910 "max_io_qpairs_per_ctrlr": 127, 00:18:03.910 "in_capsule_data_size": 4096, 00:18:03.910 "max_io_size": 131072, 00:18:03.910 "io_unit_size": 131072, 00:18:03.910 "max_aq_depth": 128, 00:18:03.910 "num_shared_buffers": 511, 00:18:03.910 "buf_cache_size": 4294967295, 00:18:03.910 "dif_insert_or_strip": false, 00:18:03.910 "zcopy": false, 00:18:03.910 "c2h_success": false, 00:18:03.910 "sock_priority": 0, 00:18:03.910 "abort_timeout_sec": 1, 00:18:03.910 "ack_timeout": 0, 00:18:03.910 "data_wr_pool_size": 0 00:18:03.910 } 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "method": "nvmf_create_subsystem", 00:18:03.910 "params": { 00:18:03.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.910 "allow_any_host": false, 00:18:03.910 "serial_number": "SPDK00000000000001", 00:18:03.910 "model_number": "SPDK bdev Controller", 00:18:03.910 "max_namespaces": 10, 00:18:03.910 "min_cntlid": 1, 00:18:03.910 "max_cntlid": 65519, 00:18:03.910 "ana_reporting": false 00:18:03.910 } 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "method": "nvmf_subsystem_add_host", 00:18:03.910 "params": { 00:18:03.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.910 "host": "nqn.2016-06.io.spdk:host1", 00:18:03.910 "psk": "/tmp/tmp.gZbaxJR2Za" 
00:18:03.910 } 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "method": "nvmf_subsystem_add_ns", 00:18:03.910 "params": { 00:18:03.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.910 "namespace": { 00:18:03.910 "nsid": 1, 00:18:03.910 "bdev_name": "malloc0", 00:18:03.910 "nguid": "E4DA1035666F4E21B6DE0AE87A7E0767", 00:18:03.910 "uuid": "e4da1035-666f-4e21-b6de-0ae87a7e0767", 00:18:03.910 "no_auto_visible": false 00:18:03.910 } 00:18:03.910 } 00:18:03.910 }, 00:18:03.910 { 00:18:03.910 "method": "nvmf_subsystem_add_listener", 00:18:03.910 "params": { 00:18:03.910 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:03.910 "listen_address": { 00:18:03.910 "trtype": "TCP", 00:18:03.910 "adrfam": "IPv4", 00:18:03.910 "traddr": "10.0.0.2", 00:18:03.910 "trsvcid": "4420" 00:18:03.910 }, 00:18:03.910 "secure_channel": true 00:18:03.910 } 00:18:03.910 } 00:18:03.910 ] 00:18:03.910 } 00:18:03.910 ] 00:18:03.910 }' 00:18:03.910 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:04.170 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:18:04.170 "subsystems": [ 00:18:04.170 { 00:18:04.170 "subsystem": "keyring", 00:18:04.170 "config": [] 00:18:04.170 }, 00:18:04.170 { 00:18:04.170 "subsystem": "iobuf", 00:18:04.170 "config": [ 00:18:04.170 { 00:18:04.170 "method": "iobuf_set_options", 00:18:04.170 "params": { 00:18:04.170 "small_pool_count": 8192, 00:18:04.170 "large_pool_count": 1024, 00:18:04.170 "small_bufsize": 8192, 00:18:04.170 "large_bufsize": 135168 00:18:04.170 } 00:18:04.170 } 00:18:04.170 ] 00:18:04.170 }, 00:18:04.170 { 00:18:04.170 "subsystem": "sock", 00:18:04.170 "config": [ 00:18:04.170 { 00:18:04.170 "method": "sock_set_default_impl", 00:18:04.170 "params": { 00:18:04.170 "impl_name": "posix" 00:18:04.170 } 00:18:04.170 }, 00:18:04.170 { 00:18:04.170 "method": "sock_impl_set_options", 00:18:04.170 
"params": { 00:18:04.170 "impl_name": "ssl", 00:18:04.170 "recv_buf_size": 4096, 00:18:04.170 "send_buf_size": 4096, 00:18:04.170 "enable_recv_pipe": true, 00:18:04.170 "enable_quickack": false, 00:18:04.170 "enable_placement_id": 0, 00:18:04.170 "enable_zerocopy_send_server": true, 00:18:04.170 "enable_zerocopy_send_client": false, 00:18:04.170 "zerocopy_threshold": 0, 00:18:04.170 "tls_version": 0, 00:18:04.170 "enable_ktls": false 00:18:04.170 } 00:18:04.170 }, 00:18:04.170 { 00:18:04.170 "method": "sock_impl_set_options", 00:18:04.170 "params": { 00:18:04.170 "impl_name": "posix", 00:18:04.170 "recv_buf_size": 2097152, 00:18:04.170 "send_buf_size": 2097152, 00:18:04.170 "enable_recv_pipe": true, 00:18:04.170 "enable_quickack": false, 00:18:04.170 "enable_placement_id": 0, 00:18:04.170 "enable_zerocopy_send_server": true, 00:18:04.170 "enable_zerocopy_send_client": false, 00:18:04.170 "zerocopy_threshold": 0, 00:18:04.170 "tls_version": 0, 00:18:04.170 "enable_ktls": false 00:18:04.170 } 00:18:04.170 } 00:18:04.170 ] 00:18:04.170 }, 00:18:04.170 { 00:18:04.170 "subsystem": "vmd", 00:18:04.170 "config": [] 00:18:04.170 }, 00:18:04.170 { 00:18:04.170 "subsystem": "accel", 00:18:04.170 "config": [ 00:18:04.170 { 00:18:04.170 "method": "accel_set_options", 00:18:04.170 "params": { 00:18:04.170 "small_cache_size": 128, 00:18:04.170 "large_cache_size": 16, 00:18:04.170 "task_count": 2048, 00:18:04.170 "sequence_count": 2048, 00:18:04.170 "buf_count": 2048 00:18:04.170 } 00:18:04.170 } 00:18:04.170 ] 00:18:04.170 }, 00:18:04.170 { 00:18:04.170 "subsystem": "bdev", 00:18:04.170 "config": [ 00:18:04.170 { 00:18:04.170 "method": "bdev_set_options", 00:18:04.170 "params": { 00:18:04.170 "bdev_io_pool_size": 65535, 00:18:04.170 "bdev_io_cache_size": 256, 00:18:04.170 "bdev_auto_examine": true, 00:18:04.170 "iobuf_small_cache_size": 128, 00:18:04.170 "iobuf_large_cache_size": 16 00:18:04.170 } 00:18:04.170 }, 00:18:04.170 { 00:18:04.170 "method": "bdev_raid_set_options", 
00:18:04.170 "params": { 00:18:04.170 "process_window_size_kb": 1024, 00:18:04.170 "process_max_bandwidth_mb_sec": 0 00:18:04.170 } 00:18:04.170 }, 00:18:04.170 { 00:18:04.170 "method": "bdev_iscsi_set_options", 00:18:04.170 "params": { 00:18:04.170 "timeout_sec": 30 00:18:04.170 } 00:18:04.170 }, 00:18:04.170 { 00:18:04.170 "method": "bdev_nvme_set_options", 00:18:04.170 "params": { 00:18:04.170 "action_on_timeout": "none", 00:18:04.170 "timeout_us": 0, 00:18:04.170 "timeout_admin_us": 0, 00:18:04.170 "keep_alive_timeout_ms": 10000, 00:18:04.170 "arbitration_burst": 0, 00:18:04.170 "low_priority_weight": 0, 00:18:04.170 "medium_priority_weight": 0, 00:18:04.170 "high_priority_weight": 0, 00:18:04.170 "nvme_adminq_poll_period_us": 10000, 00:18:04.170 "nvme_ioq_poll_period_us": 0, 00:18:04.170 "io_queue_requests": 512, 00:18:04.170 "delay_cmd_submit": true, 00:18:04.170 "transport_retry_count": 4, 00:18:04.170 "bdev_retry_count": 3, 00:18:04.170 "transport_ack_timeout": 0, 00:18:04.170 "ctrlr_loss_timeout_sec": 0, 00:18:04.170 "reconnect_delay_sec": 0, 00:18:04.170 "fast_io_fail_timeout_sec": 0, 00:18:04.170 "disable_auto_failback": false, 00:18:04.170 "generate_uuids": false, 00:18:04.170 "transport_tos": 0, 00:18:04.170 "nvme_error_stat": false, 00:18:04.170 "rdma_srq_size": 0, 00:18:04.170 "io_path_stat": false, 00:18:04.170 "allow_accel_sequence": false, 00:18:04.170 "rdma_max_cq_size": 0, 00:18:04.170 "rdma_cm_event_timeout_ms": 0, 00:18:04.170 "dhchap_digests": [ 00:18:04.170 "sha256", 00:18:04.170 "sha384", 00:18:04.170 "sha512" 00:18:04.170 ], 00:18:04.170 "dhchap_dhgroups": [ 00:18:04.170 "null", 00:18:04.170 "ffdhe2048", 00:18:04.170 "ffdhe3072", 00:18:04.170 "ffdhe4096", 00:18:04.170 "ffdhe6144", 00:18:04.170 "ffdhe8192" 00:18:04.170 ] 00:18:04.170 } 00:18:04.171 }, 00:18:04.171 { 00:18:04.171 "method": "bdev_nvme_attach_controller", 00:18:04.171 "params": { 00:18:04.171 "name": "TLSTEST", 00:18:04.171 "trtype": "TCP", 00:18:04.171 "adrfam": "IPv4", 
00:18:04.171 "traddr": "10.0.0.2", 00:18:04.171 "trsvcid": "4420", 00:18:04.171 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.171 "prchk_reftag": false, 00:18:04.171 "prchk_guard": false, 00:18:04.171 "ctrlr_loss_timeout_sec": 0, 00:18:04.171 "reconnect_delay_sec": 0, 00:18:04.171 "fast_io_fail_timeout_sec": 0, 00:18:04.171 "psk": "/tmp/tmp.gZbaxJR2Za", 00:18:04.171 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:04.171 "hdgst": false, 00:18:04.171 "ddgst": false 00:18:04.171 } 00:18:04.171 }, 00:18:04.171 { 00:18:04.171 "method": "bdev_nvme_set_hotplug", 00:18:04.171 "params": { 00:18:04.171 "period_us": 100000, 00:18:04.171 "enable": false 00:18:04.171 } 00:18:04.171 }, 00:18:04.171 { 00:18:04.171 "method": "bdev_wait_for_examine" 00:18:04.171 } 00:18:04.171 ] 00:18:04.171 }, 00:18:04.171 { 00:18:04.171 "subsystem": "nbd", 00:18:04.171 "config": [] 00:18:04.171 } 00:18:04.171 ] 00:18:04.171 }' 00:18:04.171 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 2897320 00:18:04.171 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2897320 ']' 00:18:04.171 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2897320 00:18:04.171 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:04.171 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.171 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2897320 00:18:04.464 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:04.464 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:04.464 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2897320' 00:18:04.464 killing process with 
pid 2897320 00:18:04.464 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2897320 00:18:04.464 Received shutdown signal, test time was about 10.000000 seconds 00:18:04.464 00:18:04.464 Latency(us) 00:18:04.464 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:04.464 =================================================================================================================== 00:18:04.464 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:04.464 [2024-07-26 12:18:57.437959] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:04.464 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2897320 00:18:04.464 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 2897037 00:18:04.464 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2897037 ']' 00:18:04.464 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2897037 00:18:04.464 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:04.464 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:04.464 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2897037 00:18:04.722 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:04.722 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:04.722 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2897037' 00:18:04.722 killing process with pid 2897037 00:18:04.722 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@969 -- # kill 2897037 00:18:04.722 [2024-07-26 12:18:57.712953] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:04.722 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2897037 00:18:04.980 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:18:04.980 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:04.980 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:18:04.980 "subsystems": [ 00:18:04.980 { 00:18:04.980 "subsystem": "keyring", 00:18:04.980 "config": [] 00:18:04.980 }, 00:18:04.980 { 00:18:04.980 "subsystem": "iobuf", 00:18:04.980 "config": [ 00:18:04.980 { 00:18:04.980 "method": "iobuf_set_options", 00:18:04.980 "params": { 00:18:04.980 "small_pool_count": 8192, 00:18:04.980 "large_pool_count": 1024, 00:18:04.980 "small_bufsize": 8192, 00:18:04.980 "large_bufsize": 135168 00:18:04.980 } 00:18:04.980 } 00:18:04.980 ] 00:18:04.980 }, 00:18:04.980 { 00:18:04.980 "subsystem": "sock", 00:18:04.980 "config": [ 00:18:04.980 { 00:18:04.980 "method": "sock_set_default_impl", 00:18:04.980 "params": { 00:18:04.981 "impl_name": "posix" 00:18:04.981 } 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "method": "sock_impl_set_options", 00:18:04.981 "params": { 00:18:04.981 "impl_name": "ssl", 00:18:04.981 "recv_buf_size": 4096, 00:18:04.981 "send_buf_size": 4096, 00:18:04.981 "enable_recv_pipe": true, 00:18:04.981 "enable_quickack": false, 00:18:04.981 "enable_placement_id": 0, 00:18:04.981 "enable_zerocopy_send_server": true, 00:18:04.981 "enable_zerocopy_send_client": false, 00:18:04.981 "zerocopy_threshold": 0, 00:18:04.981 "tls_version": 0, 00:18:04.981 "enable_ktls": false 00:18:04.981 } 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "method": "sock_impl_set_options", 00:18:04.981 
"params": { 00:18:04.981 "impl_name": "posix", 00:18:04.981 "recv_buf_size": 2097152, 00:18:04.981 "send_buf_size": 2097152, 00:18:04.981 "enable_recv_pipe": true, 00:18:04.981 "enable_quickack": false, 00:18:04.981 "enable_placement_id": 0, 00:18:04.981 "enable_zerocopy_send_server": true, 00:18:04.981 "enable_zerocopy_send_client": false, 00:18:04.981 "zerocopy_threshold": 0, 00:18:04.981 "tls_version": 0, 00:18:04.981 "enable_ktls": false 00:18:04.981 } 00:18:04.981 } 00:18:04.981 ] 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "subsystem": "vmd", 00:18:04.981 "config": [] 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "subsystem": "accel", 00:18:04.981 "config": [ 00:18:04.981 { 00:18:04.981 "method": "accel_set_options", 00:18:04.981 "params": { 00:18:04.981 "small_cache_size": 128, 00:18:04.981 "large_cache_size": 16, 00:18:04.981 "task_count": 2048, 00:18:04.981 "sequence_count": 2048, 00:18:04.981 "buf_count": 2048 00:18:04.981 } 00:18:04.981 } 00:18:04.981 ] 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "subsystem": "bdev", 00:18:04.981 "config": [ 00:18:04.981 { 00:18:04.981 "method": "bdev_set_options", 00:18:04.981 "params": { 00:18:04.981 "bdev_io_pool_size": 65535, 00:18:04.981 "bdev_io_cache_size": 256, 00:18:04.981 "bdev_auto_examine": true, 00:18:04.981 "iobuf_small_cache_size": 128, 00:18:04.981 "iobuf_large_cache_size": 16 00:18:04.981 } 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "method": "bdev_raid_set_options", 00:18:04.981 "params": { 00:18:04.981 "process_window_size_kb": 1024, 00:18:04.981 "process_max_bandwidth_mb_sec": 0 00:18:04.981 } 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "method": "bdev_iscsi_set_options", 00:18:04.981 "params": { 00:18:04.981 "timeout_sec": 30 00:18:04.981 } 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "method": "bdev_nvme_set_options", 00:18:04.981 "params": { 00:18:04.981 "action_on_timeout": "none", 00:18:04.981 "timeout_us": 0, 00:18:04.981 "timeout_admin_us": 0, 00:18:04.981 "keep_alive_timeout_ms": 10000, 
00:18:04.981 "arbitration_burst": 0, 00:18:04.981 "low_priority_weight": 0, 00:18:04.981 "medium_priority_weight": 0, 00:18:04.981 "high_priority_weight": 0, 00:18:04.981 "nvme_adminq_poll_period_us": 10000, 00:18:04.981 "nvme_ioq_poll_period_us": 0, 00:18:04.981 "io_queue_requests": 0, 00:18:04.981 "delay_cmd_submit": true, 00:18:04.981 "transport_retry_count": 4, 00:18:04.981 "bdev_retry_count": 3, 00:18:04.981 "transport_ack_timeout": 0, 00:18:04.981 "ctrlr_loss_timeout_sec": 0, 00:18:04.981 "reconnect_delay_sec": 0, 00:18:04.981 "fast_io_fail_timeout_sec": 0, 00:18:04.981 "disable_auto_failback": false, 00:18:04.981 "generate_uuids": false, 00:18:04.981 "transport_tos": 0, 00:18:04.981 "nvme_error_stat": false, 00:18:04.981 "rdma_srq_size": 0, 00:18:04.981 "io_path_stat": false, 00:18:04.981 "allow_accel_sequence": false, 00:18:04.981 "rdma_max_cq_size": 0, 00:18:04.981 "rdma_cm_event_timeout_ms": 0, 00:18:04.981 "dhchap_digests": [ 00:18:04.981 "sha256", 00:18:04.981 "sha384", 00:18:04.981 "sha512" 00:18:04.981 ], 00:18:04.981 "dhchap_dhgroups": [ 00:18:04.981 "null", 00:18:04.981 "ffdhe2048", 00:18:04.981 "ffdhe3072", 00:18:04.981 "ffdhe4096", 00:18:04.981 "ffdhe6144", 00:18:04.981 "ffdhe8192" 00:18:04.981 ] 00:18:04.981 } 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "method": "bdev_nvme_set_hotplug", 00:18:04.981 "params": { 00:18:04.981 "period_us": 100000, 00:18:04.981 "enable": false 00:18:04.981 } 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "method": "bdev_malloc_create", 00:18:04.981 "params": { 00:18:04.981 "name": "malloc0", 00:18:04.981 "num_blocks": 8192, 00:18:04.981 "block_size": 4096, 00:18:04.981 "physical_block_size": 4096, 00:18:04.981 "uuid": "e4da1035-666f-4e21-b6de-0ae87a7e0767", 00:18:04.981 "optimal_io_boundary": 0, 00:18:04.981 "md_size": 0, 00:18:04.981 "dif_type": 0, 00:18:04.981 "dif_is_head_of_md": false, 00:18:04.981 "dif_pi_format": 0 00:18:04.981 } 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "method": "bdev_wait_for_examine" 
00:18:04.981 } 00:18:04.981 ] 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "subsystem": "nbd", 00:18:04.981 "config": [] 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "subsystem": "scheduler", 00:18:04.981 "config": [ 00:18:04.981 { 00:18:04.981 "method": "framework_set_scheduler", 00:18:04.981 "params": { 00:18:04.981 "name": "static" 00:18:04.981 } 00:18:04.981 } 00:18:04.981 ] 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "subsystem": "nvmf", 00:18:04.981 "config": [ 00:18:04.981 { 00:18:04.981 "method": "nvmf_set_config", 00:18:04.981 "params": { 00:18:04.981 "discovery_filter": "match_any", 00:18:04.981 "admin_cmd_passthru": { 00:18:04.981 "identify_ctrlr": false 00:18:04.981 } 00:18:04.981 } 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "method": "nvmf_set_max_subsystems", 00:18:04.981 "params": { 00:18:04.981 "max_subsystems": 1024 00:18:04.981 } 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "method": "nvmf_set_crdt", 00:18:04.981 "params": { 00:18:04.981 "crdt1": 0, 00:18:04.981 "crdt2": 0, 00:18:04.981 "crdt3": 0 00:18:04.981 } 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "method": "nvmf_create_transport", 00:18:04.981 "params": { 00:18:04.981 "trtype": "TCP", 00:18:04.981 "max_queue_depth": 128, 00:18:04.981 "max_io_qpairs_per_ctrlr": 127, 00:18:04.981 "in_capsule_data_size": 4096, 00:18:04.981 "max_io_size": 131072, 00:18:04.981 "io_unit_size": 131072, 00:18:04.981 "max_aq_depth": 128, 00:18:04.981 "num_shared_buffers": 511, 00:18:04.981 "buf_cache_size": 4294967295, 00:18:04.981 "dif_insert_or_strip": false, 00:18:04.981 "zcopy": false, 00:18:04.981 "c2h_success": false, 00:18:04.981 "sock_priority": 0, 00:18:04.981 "abort_timeout_sec": 1, 00:18:04.981 "ack_timeout": 0, 00:18:04.981 "data_wr_pool_size": 0 00:18:04.981 } 00:18:04.981 }, 00:18:04.981 { 00:18:04.981 "method": "nvmf_create_subsystem", 00:18:04.981 "params": { 00:18:04.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.981 "allow_any_host": false, 00:18:04.981 "serial_number": "SPDK00000000000001", 
00:18:04.981 "model_number": "SPDK bdev Controller", 00:18:04.981 "max_namespaces": 10, 00:18:04.981 "min_cntlid": 1, 00:18:04.981 "max_cntlid": 65519, 00:18:04.981 "ana_reporting": false 00:18:04.982 } 00:18:04.982 }, 00:18:04.982 { 00:18:04.982 "method": "nvmf_subsystem_add_host", 00:18:04.982 "params": { 00:18:04.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.982 "host": "nqn.2016-06.io.spdk:host1", 00:18:04.982 "psk": "/tmp/tmp.gZbaxJR2Za" 00:18:04.982 } 00:18:04.982 }, 00:18:04.982 { 00:18:04.982 "method": "nvmf_subsystem_add_ns", 00:18:04.982 "params": { 00:18:04.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.982 "namespace": { 00:18:04.982 "nsid": 1, 00:18:04.982 "bdev_name": "malloc0", 00:18:04.982 "nguid": "E4DA1035666F4E21B6DE0AE87A7E0767", 00:18:04.982 "uuid": "e4da1035-666f-4e21-b6de-0ae87a7e0767", 00:18:04.982 "no_auto_visible": false 00:18:04.982 } 00:18:04.982 } 00:18:04.982 }, 00:18:04.982 { 00:18:04.982 "method": "nvmf_subsystem_add_listener", 00:18:04.982 "params": { 00:18:04.982 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:04.982 "listen_address": { 00:18:04.982 "trtype": "TCP", 00:18:04.982 "adrfam": "IPv4", 00:18:04.982 "traddr": "10.0.0.2", 00:18:04.982 "trsvcid": "4420" 00:18:04.982 }, 00:18:04.982 "secure_channel": true 00:18:04.982 } 00:18:04.982 } 00:18:04.982 ] 00:18:04.982 } 00:18:04.982 ] 00:18:04.982 }' 00:18:04.982 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:04.982 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.982 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2897590 00:18:04.982 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:18:04.982 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2897590 
00:18:04.982 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2897590 ']' 00:18:04.982 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:04.982 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:04.982 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:04.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:04.982 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:04.982 12:18:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:04.982 [2024-07-26 12:18:58.046512] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:18:04.982 [2024-07-26 12:18:58.046596] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:04.982 EAL: No free 2048 kB hugepages reported on node 1 00:18:04.982 [2024-07-26 12:18:58.109730] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:04.982 [2024-07-26 12:18:58.217343] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:04.982 [2024-07-26 12:18:58.217404] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:04.982 [2024-07-26 12:18:58.217417] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:04.982 [2024-07-26 12:18:58.217428] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:18:04.982 [2024-07-26 12:18:58.217438] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:04.982 [2024-07-26 12:18:58.217517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:05.240 [2024-07-26 12:18:58.452763] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:05.241 [2024-07-26 12:18:58.477739] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:05.241 [2024-07-26 12:18:58.493800] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:05.241 [2024-07-26 12:18:58.494112] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:05.806 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:05.806 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:05.806 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:05.806 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:05.806 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.064 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:06.065 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=2897671 00:18:06.065 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 2897671 /var/tmp/bdevperf.sock 00:18:06.065 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2897671 ']' 00:18:06.065 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r 
/var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:18:06.065 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:06.065 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:18:06.065 "subsystems": [ 00:18:06.065 { 00:18:06.065 "subsystem": "keyring", 00:18:06.065 "config": [] 00:18:06.065 }, 00:18:06.065 { 00:18:06.065 "subsystem": "iobuf", 00:18:06.065 "config": [ 00:18:06.065 { 00:18:06.065 "method": "iobuf_set_options", 00:18:06.065 "params": { 00:18:06.065 "small_pool_count": 8192, 00:18:06.065 "large_pool_count": 1024, 00:18:06.065 "small_bufsize": 8192, 00:18:06.065 "large_bufsize": 135168 00:18:06.065 } 00:18:06.065 } 00:18:06.065 ] 00:18:06.065 }, 00:18:06.065 { 00:18:06.065 "subsystem": "sock", 00:18:06.065 "config": [ 00:18:06.065 { 00:18:06.065 "method": "sock_set_default_impl", 00:18:06.065 "params": { 00:18:06.065 "impl_name": "posix" 00:18:06.065 } 00:18:06.065 }, 00:18:06.065 { 00:18:06.065 "method": "sock_impl_set_options", 00:18:06.065 "params": { 00:18:06.065 "impl_name": "ssl", 00:18:06.065 "recv_buf_size": 4096, 00:18:06.065 "send_buf_size": 4096, 00:18:06.065 "enable_recv_pipe": true, 00:18:06.065 "enable_quickack": false, 00:18:06.065 "enable_placement_id": 0, 00:18:06.065 "enable_zerocopy_send_server": true, 00:18:06.065 "enable_zerocopy_send_client": false, 00:18:06.065 "zerocopy_threshold": 0, 00:18:06.065 "tls_version": 0, 00:18:06.065 "enable_ktls": false 00:18:06.065 } 00:18:06.065 }, 00:18:06.065 { 00:18:06.065 "method": "sock_impl_set_options", 00:18:06.065 "params": { 00:18:06.065 "impl_name": "posix", 00:18:06.065 "recv_buf_size": 2097152, 00:18:06.065 "send_buf_size": 2097152, 00:18:06.065 "enable_recv_pipe": true, 00:18:06.065 "enable_quickack": false, 00:18:06.065 "enable_placement_id": 0, 00:18:06.065 "enable_zerocopy_send_server": true, 00:18:06.065 "enable_zerocopy_send_client": false, 00:18:06.065 
"zerocopy_threshold": 0, 00:18:06.065 "tls_version": 0, 00:18:06.065 "enable_ktls": false 00:18:06.065 } 00:18:06.065 } 00:18:06.065 ] 00:18:06.065 }, 00:18:06.065 { 00:18:06.065 "subsystem": "vmd", 00:18:06.065 "config": [] 00:18:06.065 }, 00:18:06.065 { 00:18:06.065 "subsystem": "accel", 00:18:06.065 "config": [ 00:18:06.065 { 00:18:06.065 "method": "accel_set_options", 00:18:06.065 "params": { 00:18:06.065 "small_cache_size": 128, 00:18:06.065 "large_cache_size": 16, 00:18:06.065 "task_count": 2048, 00:18:06.065 "sequence_count": 2048, 00:18:06.065 "buf_count": 2048 00:18:06.065 } 00:18:06.065 } 00:18:06.065 ] 00:18:06.065 }, 00:18:06.065 { 00:18:06.065 "subsystem": "bdev", 00:18:06.065 "config": [ 00:18:06.065 { 00:18:06.065 "method": "bdev_set_options", 00:18:06.065 "params": { 00:18:06.065 "bdev_io_pool_size": 65535, 00:18:06.065 "bdev_io_cache_size": 256, 00:18:06.065 "bdev_auto_examine": true, 00:18:06.065 "iobuf_small_cache_size": 128, 00:18:06.065 "iobuf_large_cache_size": 16 00:18:06.065 } 00:18:06.065 }, 00:18:06.065 { 00:18:06.065 "method": "bdev_raid_set_options", 00:18:06.065 "params": { 00:18:06.065 "process_window_size_kb": 1024, 00:18:06.065 "process_max_bandwidth_mb_sec": 0 00:18:06.065 } 00:18:06.065 }, 00:18:06.065 { 00:18:06.065 "method": "bdev_iscsi_set_options", 00:18:06.065 "params": { 00:18:06.065 "timeout_sec": 30 00:18:06.065 } 00:18:06.065 }, 00:18:06.065 { 00:18:06.065 "method": "bdev_nvme_set_options", 00:18:06.065 "params": { 00:18:06.065 "action_on_timeout": "none", 00:18:06.065 "timeout_us": 0, 00:18:06.065 "timeout_admin_us": 0, 00:18:06.065 "keep_alive_timeout_ms": 10000, 00:18:06.065 "arbitration_burst": 0, 00:18:06.065 "low_priority_weight": 0, 00:18:06.065 "medium_priority_weight": 0, 00:18:06.065 "high_priority_weight": 0, 00:18:06.065 "nvme_adminq_poll_period_us": 10000, 00:18:06.065 "nvme_ioq_poll_period_us": 0, 00:18:06.065 "io_queue_requests": 512, 00:18:06.065 "delay_cmd_submit": true, 00:18:06.065 
"transport_retry_count": 4, 00:18:06.065 "bdev_retry_count": 3, 00:18:06.065 "transport_ack_timeout": 0, 00:18:06.066 "ctrlr_loss_timeout_sec": 0, 00:18:06.066 "reconnect_delay_sec": 0, 00:18:06.066 "fast_io_fail_timeout_sec": 0, 00:18:06.066 "disable_auto_failback": false, 00:18:06.066 "generate_uuids": false, 00:18:06.066 "transport_tos": 0, 00:18:06.066 "nvme_error_stat": false, 00:18:06.066 "rdma_srq_size": 0, 00:18:06.066 "io_path_stat": false, 00:18:06.066 "allow_accel_sequence": false, 00:18:06.066 "rdma_max_cq_size": 0, 00:18:06.066 "rdma_cm_event_timeout_ms": 0, 00:18:06.066 "dhchap_digests": [ 00:18:06.066 "sha256", 00:18:06.066 "sha384", 00:18:06.066 "sha512" 00:18:06.066 ], 00:18:06.066 "dhchap_dhgroups": [ 00:18:06.066 "null", 00:18:06.066 "ffdhe2048", 00:18:06.066 "ffdhe3072", 00:18:06.066 "ffdhe4096", 00:18:06.066 "ffdhe6144", 00:18:06.066 "ffdhe8192" 00:18:06.066 ] 00:18:06.066 } 00:18:06.066 }, 00:18:06.066 { 00:18:06.066 "method": "bdev_nvme_attach_controller", 00:18:06.066 "params": { 00:18:06.066 "name": "TLSTEST", 00:18:06.066 "trtype": "TCP", 00:18:06.066 "adrfam": "IPv4", 00:18:06.066 "traddr": "10.0.0.2", 00:18:06.066 "trsvcid": "4420", 00:18:06.066 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:06.066 "prchk_reftag": false, 00:18:06.066 "prchk_guard": false, 00:18:06.066 "ctrlr_loss_timeout_sec": 0, 00:18:06.066 "reconnect_delay_sec": 0, 00:18:06.066 "fast_io_fail_timeout_sec": 0, 00:18:06.066 "psk": "/tmp/tmp.gZbaxJR2Za", 00:18:06.066 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:06.066 "hdgst": false, 00:18:06.066 "ddgst": false 00:18:06.066 } 00:18:06.066 }, 00:18:06.066 { 00:18:06.066 "method": "bdev_nvme_set_hotplug", 00:18:06.066 "params": { 00:18:06.066 "period_us": 100000, 00:18:06.066 "enable": false 00:18:06.066 } 00:18:06.066 }, 00:18:06.066 { 00:18:06.066 "method": "bdev_wait_for_examine" 00:18:06.066 } 00:18:06.066 ] 00:18:06.066 }, 00:18:06.066 { 00:18:06.066 "subsystem": "nbd", 00:18:06.066 "config": [] 00:18:06.066 } 
00:18:06.066 ] 00:18:06.066 }' 00:18:06.066 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:06.066 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:06.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:06.066 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:06.066 12:18:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:06.066 [2024-07-26 12:18:59.112455] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:18:06.066 [2024-07-26 12:18:59.112532] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2897671 ] 00:18:06.066 EAL: No free 2048 kB hugepages reported on node 1 00:18:06.066 [2024-07-26 12:18:59.171232] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:06.066 [2024-07-26 12:18:59.275483] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:06.324 [2024-07-26 12:18:59.434785] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:06.324 [2024-07-26 12:18:59.434907] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:06.889 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:06.889 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:06.889 12:19:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:07.146 Running I/O for 10 seconds... 00:18:17.119 00:18:17.119 Latency(us) 00:18:17.119 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.119 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:17.119 Verification LBA range: start 0x0 length 0x2000 00:18:17.119 TLSTESTn1 : 10.04 2941.54 11.49 0.00 0.00 43405.31 8738.13 65244.73 00:18:17.119 =================================================================================================================== 00:18:17.119 Total : 2941.54 11.49 0.00 0.00 43405.31 8738.13 65244.73 00:18:17.119 0 00:18:17.120 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:17.120 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 2897671 00:18:17.120 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2897671 ']' 00:18:17.120 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2897671 00:18:17.120 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:17.120 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.120 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2897671 00:18:17.120 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:17.120 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:17.120 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2897671' 00:18:17.120 killing process with pid 2897671 00:18:17.120 12:19:10 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2897671 00:18:17.120 Received shutdown signal, test time was about 10.000000 seconds 00:18:17.120 00:18:17.120 Latency(us) 00:18:17.120 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:17.120 =================================================================================================================== 00:18:17.120 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:17.120 [2024-07-26 12:19:10.282604] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:17.120 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2897671 00:18:17.379 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 2897590 00:18:17.379 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2897590 ']' 00:18:17.379 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2897590 00:18:17.379 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:17.379 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:17.379 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2897590 00:18:17.379 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:17.379 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:17.379 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2897590' 00:18:17.379 killing process with pid 2897590 00:18:17.379 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2897590 00:18:17.379 
[2024-07-26 12:19:10.569286] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:17.379 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2897590 00:18:17.636 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:18:17.636 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:17.636 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:17.636 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.636 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2899082 00:18:17.636 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:17.636 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2899082 00:18:17.637 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2899082 ']' 00:18:17.637 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:17.637 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:17.637 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:17.637 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:17.637 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:17.637 12:19:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:17.896 [2024-07-26 12:19:10.921030] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:18:17.896 [2024-07-26 12:19:10.921139] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:17.896 EAL: No free 2048 kB hugepages reported on node 1 00:18:17.896 [2024-07-26 12:19:10.982663] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:17.896 [2024-07-26 12:19:11.088298] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:17.896 [2024-07-26 12:19:11.088380] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:17.896 [2024-07-26 12:19:11.088394] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:17.896 [2024-07-26 12:19:11.088421] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:17.896 [2024-07-26 12:19:11.088440] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:17.896 [2024-07-26 12:19:11.088467] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:18.154 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:18.154 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:18.154 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:18.154 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:18.154 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:18.154 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:18.154 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.gZbaxJR2Za 00:18:18.154 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.gZbaxJR2Za 00:18:18.154 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:18.411 [2024-07-26 12:19:11.454569] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:18.411 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:18.669 12:19:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:18.927 [2024-07-26 12:19:12.048161] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:18.927 [2024-07-26 12:19:12.048422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:18.927 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:19.185 malloc0 00:18:19.185 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:19.441 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.gZbaxJR2Za 00:18:19.698 [2024-07-26 12:19:12.834154] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:19.698 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=2899367 00:18:19.698 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:19.698 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:19.698 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 2899367 /var/tmp/bdevperf.sock 00:18:19.698 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2899367 ']' 00:18:19.698 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:19.698 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:19.698 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:18:19.698 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:19.698 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:19.698 12:19:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:19.698 [2024-07-26 12:19:12.897631] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:18:19.698 [2024-07-26 12:19:12.897717] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899367 ] 00:18:19.698 EAL: No free 2048 kB hugepages reported on node 1 00:18:19.955 [2024-07-26 12:19:12.955226] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:19.955 [2024-07-26 12:19:13.058832] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:19.955 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:19.955 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:19.955 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gZbaxJR2Za 00:18:20.214 12:19:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:20.475 [2024-07-26 12:19:13.639748] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:20.475 nvme0n1 00:18:20.735 12:19:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:20.735 Running I/O for 1 seconds... 00:18:21.670 00:18:21.670 Latency(us) 00:18:21.670 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.670 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:21.670 Verification LBA range: start 0x0 length 0x2000 00:18:21.670 nvme0n1 : 1.04 2765.43 10.80 0.00 0.00 45403.85 7912.87 69905.07 00:18:21.670 =================================================================================================================== 00:18:21.670 Total : 2765.43 10.80 0.00 0.00 45403.85 7912.87 69905.07 00:18:21.670 0 00:18:21.670 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 2899367 00:18:21.670 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2899367 ']' 00:18:21.670 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2899367 00:18:21.670 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:21.670 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:21.670 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2899367 00:18:21.929 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:21.929 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:21.929 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2899367' 00:18:21.929 killing process with pid 2899367 00:18:21.929 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 
2899367 00:18:21.929 Received shutdown signal, test time was about 1.000000 seconds 00:18:21.929 00:18:21.929 Latency(us) 00:18:21.929 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:21.929 =================================================================================================================== 00:18:21.929 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:21.929 12:19:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2899367 00:18:22.189 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 2899082 00:18:22.189 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2899082 ']' 00:18:22.189 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2899082 00:18:22.189 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:22.189 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:22.189 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2899082 00:18:22.189 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:22.189 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:22.189 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2899082' 00:18:22.189 killing process with pid 2899082 00:18:22.189 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2899082 00:18:22.189 [2024-07-26 12:19:15.241273] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:22.189 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2899082 
00:18:22.448 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:18:22.448 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:22.448 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:22.448 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.448 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2899643 00:18:22.448 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:18:22.448 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2899643 00:18:22.448 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2899643 ']' 00:18:22.448 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:22.448 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:22.448 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:22.448 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:22.448 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.448 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.448 [2024-07-26 12:19:15.598018] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:18:22.448 [2024-07-26 12:19:15.598136] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:22.448 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.448 [2024-07-26 12:19:15.662006] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.708 [2024-07-26 12:19:15.777556] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:22.708 [2024-07-26 12:19:15.777636] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:22.708 [2024-07-26 12:19:15.777663] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:22.708 [2024-07-26 12:19:15.777677] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:22.708 [2024-07-26 12:19:15.777688] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:22.708 [2024-07-26 12:19:15.777728] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:22.708 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:22.708 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:22.708 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:22.708 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:22.708 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.708 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:22.708 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:18:22.708 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:22.708 12:19:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.708 [2024-07-26 12:19:15.931290] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:22.708 malloc0 00:18:22.967 [2024-07-26 12:19:15.963258] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:22.967 [2024-07-26 12:19:15.978241] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:22.967 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:22.967 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=2899677 00:18:22.967 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 2899677 /var/tmp/bdevperf.sock 00:18:22.967 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2899677 ']' 00:18:22.967 12:19:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:22.967 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:22.967 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:22.967 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:22.967 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:22.967 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:18:22.967 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:22.967 [2024-07-26 12:19:16.047382] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:18:22.967 [2024-07-26 12:19:16.047466] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2899677 ] 00:18:22.967 EAL: No free 2048 kB hugepages reported on node 1 00:18:22.967 [2024-07-26 12:19:16.107847] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:22.967 [2024-07-26 12:19:16.215161] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:23.225 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:23.225 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:23.225 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.gZbaxJR2Za 00:18:23.483 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:18:23.742 [2024-07-26 12:19:16.789716] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:23.742 nvme0n1 00:18:23.742 12:19:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:23.742 Running I/O for 1 seconds... 
00:18:25.143 00:18:25.143 Latency(us) 00:18:25.143 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.144 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:25.144 Verification LBA range: start 0x0 length 0x2000 00:18:25.144 nvme0n1 : 1.04 2749.30 10.74 0.00 0.00 45704.27 6553.60 76895.57 00:18:25.144 =================================================================================================================== 00:18:25.144 Total : 2749.30 10.74 0.00 0.00 45704.27 6553.60 76895.57 00:18:25.144 0 00:18:25.144 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:18:25.144 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@561 -- # xtrace_disable 00:18:25.144 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.144 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:18:25.144 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:18:25.144 "subsystems": [ 00:18:25.144 { 00:18:25.144 "subsystem": "keyring", 00:18:25.144 "config": [ 00:18:25.144 { 00:18:25.144 "method": "keyring_file_add_key", 00:18:25.144 "params": { 00:18:25.144 "name": "key0", 00:18:25.144 "path": "/tmp/tmp.gZbaxJR2Za" 00:18:25.144 } 00:18:25.144 } 00:18:25.144 ] 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "subsystem": "iobuf", 00:18:25.144 "config": [ 00:18:25.144 { 00:18:25.144 "method": "iobuf_set_options", 00:18:25.144 "params": { 00:18:25.144 "small_pool_count": 8192, 00:18:25.144 "large_pool_count": 1024, 00:18:25.144 "small_bufsize": 8192, 00:18:25.144 "large_bufsize": 135168 00:18:25.144 } 00:18:25.144 } 00:18:25.144 ] 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "subsystem": "sock", 00:18:25.144 "config": [ 00:18:25.144 { 00:18:25.144 "method": "sock_set_default_impl", 00:18:25.144 "params": { 00:18:25.144 "impl_name": "posix" 00:18:25.144 } 
00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "method": "sock_impl_set_options", 00:18:25.144 "params": { 00:18:25.144 "impl_name": "ssl", 00:18:25.144 "recv_buf_size": 4096, 00:18:25.144 "send_buf_size": 4096, 00:18:25.144 "enable_recv_pipe": true, 00:18:25.144 "enable_quickack": false, 00:18:25.144 "enable_placement_id": 0, 00:18:25.144 "enable_zerocopy_send_server": true, 00:18:25.144 "enable_zerocopy_send_client": false, 00:18:25.144 "zerocopy_threshold": 0, 00:18:25.144 "tls_version": 0, 00:18:25.144 "enable_ktls": false 00:18:25.144 } 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "method": "sock_impl_set_options", 00:18:25.144 "params": { 00:18:25.144 "impl_name": "posix", 00:18:25.144 "recv_buf_size": 2097152, 00:18:25.144 "send_buf_size": 2097152, 00:18:25.144 "enable_recv_pipe": true, 00:18:25.144 "enable_quickack": false, 00:18:25.144 "enable_placement_id": 0, 00:18:25.144 "enable_zerocopy_send_server": true, 00:18:25.144 "enable_zerocopy_send_client": false, 00:18:25.144 "zerocopy_threshold": 0, 00:18:25.144 "tls_version": 0, 00:18:25.144 "enable_ktls": false 00:18:25.144 } 00:18:25.144 } 00:18:25.144 ] 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "subsystem": "vmd", 00:18:25.144 "config": [] 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "subsystem": "accel", 00:18:25.144 "config": [ 00:18:25.144 { 00:18:25.144 "method": "accel_set_options", 00:18:25.144 "params": { 00:18:25.144 "small_cache_size": 128, 00:18:25.144 "large_cache_size": 16, 00:18:25.144 "task_count": 2048, 00:18:25.144 "sequence_count": 2048, 00:18:25.144 "buf_count": 2048 00:18:25.144 } 00:18:25.144 } 00:18:25.144 ] 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "subsystem": "bdev", 00:18:25.144 "config": [ 00:18:25.144 { 00:18:25.144 "method": "bdev_set_options", 00:18:25.144 "params": { 00:18:25.144 "bdev_io_pool_size": 65535, 00:18:25.144 "bdev_io_cache_size": 256, 00:18:25.144 "bdev_auto_examine": true, 00:18:25.144 "iobuf_small_cache_size": 128, 00:18:25.144 "iobuf_large_cache_size": 16 
00:18:25.144 } 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "method": "bdev_raid_set_options", 00:18:25.144 "params": { 00:18:25.144 "process_window_size_kb": 1024, 00:18:25.144 "process_max_bandwidth_mb_sec": 0 00:18:25.144 } 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "method": "bdev_iscsi_set_options", 00:18:25.144 "params": { 00:18:25.144 "timeout_sec": 30 00:18:25.144 } 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "method": "bdev_nvme_set_options", 00:18:25.144 "params": { 00:18:25.144 "action_on_timeout": "none", 00:18:25.144 "timeout_us": 0, 00:18:25.144 "timeout_admin_us": 0, 00:18:25.144 "keep_alive_timeout_ms": 10000, 00:18:25.144 "arbitration_burst": 0, 00:18:25.144 "low_priority_weight": 0, 00:18:25.144 "medium_priority_weight": 0, 00:18:25.144 "high_priority_weight": 0, 00:18:25.144 "nvme_adminq_poll_period_us": 10000, 00:18:25.144 "nvme_ioq_poll_period_us": 0, 00:18:25.144 "io_queue_requests": 0, 00:18:25.144 "delay_cmd_submit": true, 00:18:25.144 "transport_retry_count": 4, 00:18:25.144 "bdev_retry_count": 3, 00:18:25.144 "transport_ack_timeout": 0, 00:18:25.144 "ctrlr_loss_timeout_sec": 0, 00:18:25.144 "reconnect_delay_sec": 0, 00:18:25.144 "fast_io_fail_timeout_sec": 0, 00:18:25.144 "disable_auto_failback": false, 00:18:25.144 "generate_uuids": false, 00:18:25.144 "transport_tos": 0, 00:18:25.144 "nvme_error_stat": false, 00:18:25.144 "rdma_srq_size": 0, 00:18:25.144 "io_path_stat": false, 00:18:25.144 "allow_accel_sequence": false, 00:18:25.144 "rdma_max_cq_size": 0, 00:18:25.144 "rdma_cm_event_timeout_ms": 0, 00:18:25.144 "dhchap_digests": [ 00:18:25.144 "sha256", 00:18:25.144 "sha384", 00:18:25.144 "sha512" 00:18:25.144 ], 00:18:25.144 "dhchap_dhgroups": [ 00:18:25.144 "null", 00:18:25.144 "ffdhe2048", 00:18:25.144 "ffdhe3072", 00:18:25.144 "ffdhe4096", 00:18:25.144 "ffdhe6144", 00:18:25.144 "ffdhe8192" 00:18:25.144 ] 00:18:25.144 } 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "method": "bdev_nvme_set_hotplug", 00:18:25.144 "params": { 00:18:25.144 
"period_us": 100000, 00:18:25.144 "enable": false 00:18:25.144 } 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "method": "bdev_malloc_create", 00:18:25.144 "params": { 00:18:25.144 "name": "malloc0", 00:18:25.144 "num_blocks": 8192, 00:18:25.144 "block_size": 4096, 00:18:25.144 "physical_block_size": 4096, 00:18:25.144 "uuid": "ee851cfa-d5d8-43d3-b6c0-f20436695eb4", 00:18:25.144 "optimal_io_boundary": 0, 00:18:25.144 "md_size": 0, 00:18:25.144 "dif_type": 0, 00:18:25.144 "dif_is_head_of_md": false, 00:18:25.144 "dif_pi_format": 0 00:18:25.144 } 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "method": "bdev_wait_for_examine" 00:18:25.144 } 00:18:25.144 ] 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "subsystem": "nbd", 00:18:25.144 "config": [] 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "subsystem": "scheduler", 00:18:25.144 "config": [ 00:18:25.144 { 00:18:25.144 "method": "framework_set_scheduler", 00:18:25.144 "params": { 00:18:25.144 "name": "static" 00:18:25.144 } 00:18:25.144 } 00:18:25.144 ] 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "subsystem": "nvmf", 00:18:25.144 "config": [ 00:18:25.144 { 00:18:25.144 "method": "nvmf_set_config", 00:18:25.144 "params": { 00:18:25.144 "discovery_filter": "match_any", 00:18:25.144 "admin_cmd_passthru": { 00:18:25.144 "identify_ctrlr": false 00:18:25.144 } 00:18:25.144 } 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "method": "nvmf_set_max_subsystems", 00:18:25.144 "params": { 00:18:25.144 "max_subsystems": 1024 00:18:25.144 } 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "method": "nvmf_set_crdt", 00:18:25.144 "params": { 00:18:25.144 "crdt1": 0, 00:18:25.144 "crdt2": 0, 00:18:25.144 "crdt3": 0 00:18:25.144 } 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "method": "nvmf_create_transport", 00:18:25.144 "params": { 00:18:25.144 "trtype": "TCP", 00:18:25.144 "max_queue_depth": 128, 00:18:25.144 "max_io_qpairs_per_ctrlr": 127, 00:18:25.144 "in_capsule_data_size": 4096, 00:18:25.144 "max_io_size": 131072, 00:18:25.144 "io_unit_size": 
131072, 00:18:25.144 "max_aq_depth": 128, 00:18:25.144 "num_shared_buffers": 511, 00:18:25.144 "buf_cache_size": 4294967295, 00:18:25.144 "dif_insert_or_strip": false, 00:18:25.144 "zcopy": false, 00:18:25.144 "c2h_success": false, 00:18:25.144 "sock_priority": 0, 00:18:25.144 "abort_timeout_sec": 1, 00:18:25.144 "ack_timeout": 0, 00:18:25.144 "data_wr_pool_size": 0 00:18:25.144 } 00:18:25.144 }, 00:18:25.144 { 00:18:25.144 "method": "nvmf_create_subsystem", 00:18:25.145 "params": { 00:18:25.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.145 "allow_any_host": false, 00:18:25.145 "serial_number": "00000000000000000000", 00:18:25.145 "model_number": "SPDK bdev Controller", 00:18:25.145 "max_namespaces": 32, 00:18:25.145 "min_cntlid": 1, 00:18:25.145 "max_cntlid": 65519, 00:18:25.145 "ana_reporting": false 00:18:25.145 } 00:18:25.145 }, 00:18:25.145 { 00:18:25.145 "method": "nvmf_subsystem_add_host", 00:18:25.145 "params": { 00:18:25.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.145 "host": "nqn.2016-06.io.spdk:host1", 00:18:25.145 "psk": "key0" 00:18:25.145 } 00:18:25.145 }, 00:18:25.145 { 00:18:25.145 "method": "nvmf_subsystem_add_ns", 00:18:25.145 "params": { 00:18:25.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.145 "namespace": { 00:18:25.145 "nsid": 1, 00:18:25.145 "bdev_name": "malloc0", 00:18:25.145 "nguid": "EE851CFAD5D843D3B6C0F20436695EB4", 00:18:25.145 "uuid": "ee851cfa-d5d8-43d3-b6c0-f20436695eb4", 00:18:25.145 "no_auto_visible": false 00:18:25.145 } 00:18:25.145 } 00:18:25.145 }, 00:18:25.145 { 00:18:25.145 "method": "nvmf_subsystem_add_listener", 00:18:25.145 "params": { 00:18:25.145 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.145 "listen_address": { 00:18:25.145 "trtype": "TCP", 00:18:25.145 "adrfam": "IPv4", 00:18:25.145 "traddr": "10.0.0.2", 00:18:25.145 "trsvcid": "4420" 00:18:25.145 }, 00:18:25.145 "secure_channel": false, 00:18:25.145 "sock_impl": "ssl" 00:18:25.145 } 00:18:25.145 } 00:18:25.145 ] 00:18:25.145 } 00:18:25.145 ] 
00:18:25.145 }' 00:18:25.145 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:18:25.405 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:18:25.405 "subsystems": [ 00:18:25.405 { 00:18:25.405 "subsystem": "keyring", 00:18:25.405 "config": [ 00:18:25.405 { 00:18:25.405 "method": "keyring_file_add_key", 00:18:25.405 "params": { 00:18:25.405 "name": "key0", 00:18:25.405 "path": "/tmp/tmp.gZbaxJR2Za" 00:18:25.405 } 00:18:25.405 } 00:18:25.405 ] 00:18:25.405 }, 00:18:25.405 { 00:18:25.405 "subsystem": "iobuf", 00:18:25.405 "config": [ 00:18:25.405 { 00:18:25.405 "method": "iobuf_set_options", 00:18:25.405 "params": { 00:18:25.405 "small_pool_count": 8192, 00:18:25.405 "large_pool_count": 1024, 00:18:25.405 "small_bufsize": 8192, 00:18:25.405 "large_bufsize": 135168 00:18:25.405 } 00:18:25.405 } 00:18:25.405 ] 00:18:25.405 }, 00:18:25.405 { 00:18:25.405 "subsystem": "sock", 00:18:25.405 "config": [ 00:18:25.405 { 00:18:25.405 "method": "sock_set_default_impl", 00:18:25.405 "params": { 00:18:25.405 "impl_name": "posix" 00:18:25.405 } 00:18:25.405 }, 00:18:25.405 { 00:18:25.405 "method": "sock_impl_set_options", 00:18:25.405 "params": { 00:18:25.405 "impl_name": "ssl", 00:18:25.405 "recv_buf_size": 4096, 00:18:25.405 "send_buf_size": 4096, 00:18:25.405 "enable_recv_pipe": true, 00:18:25.405 "enable_quickack": false, 00:18:25.405 "enable_placement_id": 0, 00:18:25.405 "enable_zerocopy_send_server": true, 00:18:25.405 "enable_zerocopy_send_client": false, 00:18:25.405 "zerocopy_threshold": 0, 00:18:25.405 "tls_version": 0, 00:18:25.405 "enable_ktls": false 00:18:25.405 } 00:18:25.405 }, 00:18:25.405 { 00:18:25.405 "method": "sock_impl_set_options", 00:18:25.405 "params": { 00:18:25.405 "impl_name": "posix", 00:18:25.405 "recv_buf_size": 2097152, 00:18:25.405 "send_buf_size": 2097152, 00:18:25.405 
"enable_recv_pipe": true, 00:18:25.405 "enable_quickack": false, 00:18:25.405 "enable_placement_id": 0, 00:18:25.405 "enable_zerocopy_send_server": true, 00:18:25.405 "enable_zerocopy_send_client": false, 00:18:25.405 "zerocopy_threshold": 0, 00:18:25.405 "tls_version": 0, 00:18:25.405 "enable_ktls": false 00:18:25.405 } 00:18:25.405 } 00:18:25.405 ] 00:18:25.405 }, 00:18:25.405 { 00:18:25.405 "subsystem": "vmd", 00:18:25.405 "config": [] 00:18:25.405 }, 00:18:25.405 { 00:18:25.405 "subsystem": "accel", 00:18:25.405 "config": [ 00:18:25.405 { 00:18:25.405 "method": "accel_set_options", 00:18:25.405 "params": { 00:18:25.405 "small_cache_size": 128, 00:18:25.405 "large_cache_size": 16, 00:18:25.405 "task_count": 2048, 00:18:25.405 "sequence_count": 2048, 00:18:25.405 "buf_count": 2048 00:18:25.405 } 00:18:25.405 } 00:18:25.405 ] 00:18:25.405 }, 00:18:25.405 { 00:18:25.405 "subsystem": "bdev", 00:18:25.405 "config": [ 00:18:25.405 { 00:18:25.405 "method": "bdev_set_options", 00:18:25.405 "params": { 00:18:25.405 "bdev_io_pool_size": 65535, 00:18:25.405 "bdev_io_cache_size": 256, 00:18:25.405 "bdev_auto_examine": true, 00:18:25.405 "iobuf_small_cache_size": 128, 00:18:25.405 "iobuf_large_cache_size": 16 00:18:25.405 } 00:18:25.405 }, 00:18:25.405 { 00:18:25.405 "method": "bdev_raid_set_options", 00:18:25.405 "params": { 00:18:25.405 "process_window_size_kb": 1024, 00:18:25.405 "process_max_bandwidth_mb_sec": 0 00:18:25.405 } 00:18:25.405 }, 00:18:25.405 { 00:18:25.405 "method": "bdev_iscsi_set_options", 00:18:25.405 "params": { 00:18:25.405 "timeout_sec": 30 00:18:25.405 } 00:18:25.405 }, 00:18:25.405 { 00:18:25.405 "method": "bdev_nvme_set_options", 00:18:25.405 "params": { 00:18:25.405 "action_on_timeout": "none", 00:18:25.405 "timeout_us": 0, 00:18:25.405 "timeout_admin_us": 0, 00:18:25.405 "keep_alive_timeout_ms": 10000, 00:18:25.405 "arbitration_burst": 0, 00:18:25.405 "low_priority_weight": 0, 00:18:25.405 "medium_priority_weight": 0, 00:18:25.405 
"high_priority_weight": 0, 00:18:25.405 "nvme_adminq_poll_period_us": 10000, 00:18:25.405 "nvme_ioq_poll_period_us": 0, 00:18:25.405 "io_queue_requests": 512, 00:18:25.405 "delay_cmd_submit": true, 00:18:25.405 "transport_retry_count": 4, 00:18:25.405 "bdev_retry_count": 3, 00:18:25.405 "transport_ack_timeout": 0, 00:18:25.405 "ctrlr_loss_timeout_sec": 0, 00:18:25.405 "reconnect_delay_sec": 0, 00:18:25.405 "fast_io_fail_timeout_sec": 0, 00:18:25.405 "disable_auto_failback": false, 00:18:25.405 "generate_uuids": false, 00:18:25.405 "transport_tos": 0, 00:18:25.405 "nvme_error_stat": false, 00:18:25.405 "rdma_srq_size": 0, 00:18:25.405 "io_path_stat": false, 00:18:25.405 "allow_accel_sequence": false, 00:18:25.405 "rdma_max_cq_size": 0, 00:18:25.405 "rdma_cm_event_timeout_ms": 0, 00:18:25.405 "dhchap_digests": [ 00:18:25.405 "sha256", 00:18:25.405 "sha384", 00:18:25.405 "sha512" 00:18:25.405 ], 00:18:25.405 "dhchap_dhgroups": [ 00:18:25.405 "null", 00:18:25.405 "ffdhe2048", 00:18:25.406 "ffdhe3072", 00:18:25.406 "ffdhe4096", 00:18:25.406 "ffdhe6144", 00:18:25.406 "ffdhe8192" 00:18:25.406 ] 00:18:25.406 } 00:18:25.406 }, 00:18:25.406 { 00:18:25.406 "method": "bdev_nvme_attach_controller", 00:18:25.406 "params": { 00:18:25.406 "name": "nvme0", 00:18:25.406 "trtype": "TCP", 00:18:25.406 "adrfam": "IPv4", 00:18:25.406 "traddr": "10.0.0.2", 00:18:25.406 "trsvcid": "4420", 00:18:25.406 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.406 "prchk_reftag": false, 00:18:25.406 "prchk_guard": false, 00:18:25.406 "ctrlr_loss_timeout_sec": 0, 00:18:25.406 "reconnect_delay_sec": 0, 00:18:25.406 "fast_io_fail_timeout_sec": 0, 00:18:25.406 "psk": "key0", 00:18:25.406 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:25.406 "hdgst": false, 00:18:25.406 "ddgst": false 00:18:25.406 } 00:18:25.406 }, 00:18:25.406 { 00:18:25.406 "method": "bdev_nvme_set_hotplug", 00:18:25.406 "params": { 00:18:25.406 "period_us": 100000, 00:18:25.406 "enable": false 00:18:25.406 } 00:18:25.406 }, 
00:18:25.406 { 00:18:25.406 "method": "bdev_enable_histogram", 00:18:25.406 "params": { 00:18:25.406 "name": "nvme0n1", 00:18:25.406 "enable": true 00:18:25.406 } 00:18:25.406 }, 00:18:25.406 { 00:18:25.406 "method": "bdev_wait_for_examine" 00:18:25.406 } 00:18:25.406 ] 00:18:25.406 }, 00:18:25.406 { 00:18:25.406 "subsystem": "nbd", 00:18:25.406 "config": [] 00:18:25.406 } 00:18:25.406 ] 00:18:25.406 }' 00:18:25.406 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 2899677 00:18:25.406 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2899677 ']' 00:18:25.406 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2899677 00:18:25.406 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:25.406 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:25.406 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2899677 00:18:25.406 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:25.406 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:25.406 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2899677' 00:18:25.406 killing process with pid 2899677 00:18:25.406 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2899677 00:18:25.406 Received shutdown signal, test time was about 1.000000 seconds 00:18:25.406 00:18:25.406 Latency(us) 00:18:25.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:25.406 =================================================================================================================== 00:18:25.406 Total : 0.00 0.00 0.00 0.00 0.00 0.00 
0.00 00:18:25.406 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2899677 00:18:25.666 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 2899643 00:18:25.666 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2899643 ']' 00:18:25.666 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2899643 00:18:25.666 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:25.666 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:25.666 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2899643 00:18:25.666 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:25.666 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:25.666 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2899643' 00:18:25.666 killing process with pid 2899643 00:18:25.666 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2899643 00:18:25.666 12:19:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2899643 00:18:25.925 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:18:25.925 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:25.925 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:18:25.925 "subsystems": [ 00:18:25.925 { 00:18:25.925 "subsystem": "keyring", 00:18:25.925 "config": [ 00:18:25.925 { 00:18:25.925 "method": "keyring_file_add_key", 00:18:25.925 "params": { 00:18:25.925 "name": "key0", 00:18:25.925 "path": 
"/tmp/tmp.gZbaxJR2Za" 00:18:25.925 } 00:18:25.925 } 00:18:25.925 ] 00:18:25.925 }, 00:18:25.925 { 00:18:25.925 "subsystem": "iobuf", 00:18:25.925 "config": [ 00:18:25.925 { 00:18:25.925 "method": "iobuf_set_options", 00:18:25.925 "params": { 00:18:25.925 "small_pool_count": 8192, 00:18:25.925 "large_pool_count": 1024, 00:18:25.925 "small_bufsize": 8192, 00:18:25.925 "large_bufsize": 135168 00:18:25.925 } 00:18:25.925 } 00:18:25.925 ] 00:18:25.925 }, 00:18:25.925 { 00:18:25.925 "subsystem": "sock", 00:18:25.925 "config": [ 00:18:25.925 { 00:18:25.925 "method": "sock_set_default_impl", 00:18:25.925 "params": { 00:18:25.925 "impl_name": "posix" 00:18:25.925 } 00:18:25.925 }, 00:18:25.925 { 00:18:25.925 "method": "sock_impl_set_options", 00:18:25.925 "params": { 00:18:25.925 "impl_name": "ssl", 00:18:25.925 "recv_buf_size": 4096, 00:18:25.925 "send_buf_size": 4096, 00:18:25.925 "enable_recv_pipe": true, 00:18:25.925 "enable_quickack": false, 00:18:25.925 "enable_placement_id": 0, 00:18:25.925 "enable_zerocopy_send_server": true, 00:18:25.925 "enable_zerocopy_send_client": false, 00:18:25.925 "zerocopy_threshold": 0, 00:18:25.925 "tls_version": 0, 00:18:25.925 "enable_ktls": false 00:18:25.925 } 00:18:25.925 }, 00:18:25.925 { 00:18:25.925 "method": "sock_impl_set_options", 00:18:25.925 "params": { 00:18:25.925 "impl_name": "posix", 00:18:25.925 "recv_buf_size": 2097152, 00:18:25.925 "send_buf_size": 2097152, 00:18:25.925 "enable_recv_pipe": true, 00:18:25.925 "enable_quickack": false, 00:18:25.925 "enable_placement_id": 0, 00:18:25.925 "enable_zerocopy_send_server": true, 00:18:25.925 "enable_zerocopy_send_client": false, 00:18:25.925 "zerocopy_threshold": 0, 00:18:25.925 "tls_version": 0, 00:18:25.925 "enable_ktls": false 00:18:25.925 } 00:18:25.925 } 00:18:25.925 ] 00:18:25.925 }, 00:18:25.925 { 00:18:25.925 "subsystem": "vmd", 00:18:25.925 "config": [] 00:18:25.925 }, 00:18:25.925 { 00:18:25.925 "subsystem": "accel", 00:18:25.925 "config": [ 00:18:25.925 { 
00:18:25.925 "method": "accel_set_options", 00:18:25.925 "params": { 00:18:25.925 "small_cache_size": 128, 00:18:25.925 "large_cache_size": 16, 00:18:25.925 "task_count": 2048, 00:18:25.925 "sequence_count": 2048, 00:18:25.925 "buf_count": 2048 00:18:25.925 } 00:18:25.925 } 00:18:25.925 ] 00:18:25.925 }, 00:18:25.925 { 00:18:25.925 "subsystem": "bdev", 00:18:25.926 "config": [ 00:18:25.926 { 00:18:25.926 "method": "bdev_set_options", 00:18:25.926 "params": { 00:18:25.926 "bdev_io_pool_size": 65535, 00:18:25.926 "bdev_io_cache_size": 256, 00:18:25.926 "bdev_auto_examine": true, 00:18:25.926 "iobuf_small_cache_size": 128, 00:18:25.926 "iobuf_large_cache_size": 16 00:18:25.926 } 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "method": "bdev_raid_set_options", 00:18:25.926 "params": { 00:18:25.926 "process_window_size_kb": 1024, 00:18:25.926 "process_max_bandwidth_mb_sec": 0 00:18:25.926 } 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "method": "bdev_iscsi_set_options", 00:18:25.926 "params": { 00:18:25.926 "timeout_sec": 30 00:18:25.926 } 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "method": "bdev_nvme_set_options", 00:18:25.926 "params": { 00:18:25.926 "action_on_timeout": "none", 00:18:25.926 "timeout_us": 0, 00:18:25.926 "timeout_admin_us": 0, 00:18:25.926 "keep_alive_timeout_ms": 10000, 00:18:25.926 "arbitration_burst": 0, 00:18:25.926 "low_priority_weight": 0, 00:18:25.926 "medium_priority_weight": 0, 00:18:25.926 "high_priority_weight": 0, 00:18:25.926 "nvme_adminq_poll_period_us": 10000, 00:18:25.926 "nvme_ioq_poll_period_us": 0, 00:18:25.926 "io_queue_requests": 0, 00:18:25.926 "delay_cmd_submit": true, 00:18:25.926 "transport_retry_count": 4, 00:18:25.926 "bdev_retry_count": 3, 00:18:25.926 "transport_ack_timeout": 0, 00:18:25.926 "ctrlr_loss_timeout_sec": 0, 00:18:25.926 "reconnect_delay_sec": 0, 00:18:25.926 "fast_io_fail_timeout_sec": 0, 00:18:25.926 "disable_auto_failback": false, 00:18:25.926 "generate_uuids": false, 00:18:25.926 "transport_tos": 0, 
00:18:25.926 "nvme_error_stat": false, 00:18:25.926 "rdma_srq_size": 0, 00:18:25.926 "io_path_stat": false, 00:18:25.926 "allow_accel_sequence": false, 00:18:25.926 "rdma_max_cq_size": 0, 00:18:25.926 "rdma_cm_event_timeout_ms": 0, 00:18:25.926 "dhchap_digests": [ 00:18:25.926 "sha256", 00:18:25.926 "sha384", 00:18:25.926 "sha512" 00:18:25.926 ], 00:18:25.926 "dhchap_dhgroups": [ 00:18:25.926 "null", 00:18:25.926 "ffdhe2048", 00:18:25.926 "ffdhe3072", 00:18:25.926 "ffdhe4096", 00:18:25.926 "ffdhe6144", 00:18:25.926 "ffdhe8192" 00:18:25.926 ] 00:18:25.926 } 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "method": "bdev_nvme_set_hotplug", 00:18:25.926 "params": { 00:18:25.926 "period_us": 100000, 00:18:25.926 "enable": false 00:18:25.926 } 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "method": "bdev_malloc_create", 00:18:25.926 "params": { 00:18:25.926 "name": "malloc0", 00:18:25.926 "num_blocks": 8192, 00:18:25.926 "block_size": 4096, 00:18:25.926 "physical_block_size": 4096, 00:18:25.926 "uuid": "ee851cfa-d5d8-43d3-b6c0-f20436695eb4", 00:18:25.926 "optimal_io_boundary": 0, 00:18:25.926 "md_size": 0, 00:18:25.926 "dif_type": 0, 00:18:25.926 "dif_is_head_of_md": false, 00:18:25.926 "dif_pi_format": 0 00:18:25.926 } 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "method": "bdev_wait_for_examine" 00:18:25.926 } 00:18:25.926 ] 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "subsystem": "nbd", 00:18:25.926 "config": [] 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "subsystem": "scheduler", 00:18:25.926 "config": [ 00:18:25.926 { 00:18:25.926 "method": "framework_set_scheduler", 00:18:25.926 "params": { 00:18:25.926 "name": "static" 00:18:25.926 } 00:18:25.926 } 00:18:25.926 ] 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "subsystem": "nvmf", 00:18:25.926 "config": [ 00:18:25.926 { 00:18:25.926 "method": "nvmf_set_config", 00:18:25.926 "params": { 00:18:25.926 "discovery_filter": "match_any", 00:18:25.926 "admin_cmd_passthru": { 00:18:25.926 "identify_ctrlr": false 00:18:25.926 } 
00:18:25.926 } 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "method": "nvmf_set_max_subsystems", 00:18:25.926 "params": { 00:18:25.926 "max_subsystems": 1024 00:18:25.926 } 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "method": "nvmf_set_crdt", 00:18:25.926 "params": { 00:18:25.926 "crdt1": 0, 00:18:25.926 "crdt2": 0, 00:18:25.926 "crdt3": 0 00:18:25.926 } 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "method": "nvmf_create_transport", 00:18:25.926 "params": { 00:18:25.926 "trtype": "TCP", 00:18:25.926 "max_queue_depth": 128, 00:18:25.926 "max_io_qpairs_per_ctrlr": 127, 00:18:25.926 "in_capsule_data_size": 4096, 00:18:25.926 "max_io_size": 131072, 00:18:25.926 "io_unit_size": 131072, 00:18:25.926 "max_aq_depth": 128, 00:18:25.926 "num_shared_buffers": 511, 00:18:25.926 "buf_cache_size": 4294967295, 00:18:25.926 "dif_insert_or_strip": false, 00:18:25.926 "zcopy": false, 00:18:25.926 "c2h_success": false, 00:18:25.926 "sock_priority": 0, 00:18:25.926 "abort_timeout_sec": 1, 00:18:25.926 "ack_timeout": 0, 00:18:25.926 "data_wr_pool_size": 0 00:18:25.926 } 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "method": "nvmf_create_subsystem", 00:18:25.926 "params": { 00:18:25.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.926 "allow_any_host": false, 00:18:25.926 "serial_number": "00000000000000000000", 00:18:25.926 "model_number": "SPDK bdev Controller", 00:18:25.926 "max_namespaces": 32, 00:18:25.926 "min_cntlid": 1, 00:18:25.926 "max_cntlid": 65519, 00:18:25.926 "ana_reporting": false 00:18:25.926 } 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "method": "nvmf_subsystem_add_host", 00:18:25.926 "params": { 00:18:25.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.926 "host": "nqn.2016-06.io.spdk:host1", 00:18:25.926 "psk": "key0" 00:18:25.926 } 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "method": "nvmf_subsystem_add_ns", 00:18:25.926 "params": { 00:18:25.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.926 "namespace": { 00:18:25.926 "nsid": 1, 00:18:25.926 "bdev_name": 
"malloc0", 00:18:25.926 "nguid": "EE851CFAD5D843D3B6C0F20436695EB4", 00:18:25.926 "uuid": "ee851cfa-d5d8-43d3-b6c0-f20436695eb4", 00:18:25.926 "no_auto_visible": false 00:18:25.926 } 00:18:25.926 } 00:18:25.926 }, 00:18:25.926 { 00:18:25.926 "method": "nvmf_subsystem_add_listener", 00:18:25.926 "params": { 00:18:25.926 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:18:25.926 "listen_address": { 00:18:25.926 "trtype": "TCP", 00:18:25.926 "adrfam": "IPv4", 00:18:25.926 "traddr": "10.0.0.2", 00:18:25.926 "trsvcid": "4420" 00:18:25.926 }, 00:18:25.926 "secure_channel": false, 00:18:25.926 "sock_impl": "ssl" 00:18:25.926 } 00:18:25.926 } 00:18:25.926 ] 00:18:25.926 } 00:18:25.926 ] 00:18:25.926 }' 00:18:25.926 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@724 -- # xtrace_disable 00:18:25.926 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.926 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=2900082 00:18:25.926 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:18:25.926 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 2900082 00:18:25.926 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2900082 ']' 00:18:25.926 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:25.926 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:25.926 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:25.926 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:18:25.926 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:25.926 12:19:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:25.926 [2024-07-26 12:19:19.139013] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:18:25.926 [2024-07-26 12:19:19.139131] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:25.926 EAL: No free 2048 kB hugepages reported on node 1 00:18:26.185 [2024-07-26 12:19:19.207370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:26.185 [2024-07-26 12:19:19.321863] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:26.185 [2024-07-26 12:19:19.321931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:26.185 [2024-07-26 12:19:19.321956] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:26.185 [2024-07-26 12:19:19.321969] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:26.185 [2024-07-26 12:19:19.321981] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:26.185 [2024-07-26 12:19:19.322085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:26.443 [2024-07-26 12:19:19.563130] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.443 [2024-07-26 12:19:19.602845] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:26.443 [2024-07-26 12:19:19.603118] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.009 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:27.009 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:27.009 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:27.009 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:27.009 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.009 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:27.009 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=2900233 00:18:27.009 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 2900233 /var/tmp/bdevperf.sock 00:18:27.009 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@831 -- # '[' -z 2900233 ']' 00:18:27.009 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:27.009 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:18:27.009 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # local 
max_retries=100 00:18:27.009 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:18:27.009 "subsystems": [ 00:18:27.009 { 00:18:27.009 "subsystem": "keyring", 00:18:27.009 "config": [ 00:18:27.009 { 00:18:27.009 "method": "keyring_file_add_key", 00:18:27.009 "params": { 00:18:27.009 "name": "key0", 00:18:27.009 "path": "/tmp/tmp.gZbaxJR2Za" 00:18:27.009 } 00:18:27.009 } 00:18:27.009 ] 00:18:27.009 }, 00:18:27.009 { 00:18:27.009 "subsystem": "iobuf", 00:18:27.009 "config": [ 00:18:27.009 { 00:18:27.009 "method": "iobuf_set_options", 00:18:27.009 "params": { 00:18:27.009 "small_pool_count": 8192, 00:18:27.009 "large_pool_count": 1024, 00:18:27.009 "small_bufsize": 8192, 00:18:27.009 "large_bufsize": 135168 00:18:27.009 } 00:18:27.009 } 00:18:27.009 ] 00:18:27.009 }, 00:18:27.009 { 00:18:27.009 "subsystem": "sock", 00:18:27.009 "config": [ 00:18:27.009 { 00:18:27.009 "method": "sock_set_default_impl", 00:18:27.009 "params": { 00:18:27.009 "impl_name": "posix" 00:18:27.009 } 00:18:27.009 }, 00:18:27.009 { 00:18:27.009 "method": "sock_impl_set_options", 00:18:27.009 "params": { 00:18:27.009 "impl_name": "ssl", 00:18:27.009 "recv_buf_size": 4096, 00:18:27.009 "send_buf_size": 4096, 00:18:27.009 "enable_recv_pipe": true, 00:18:27.009 "enable_quickack": false, 00:18:27.009 "enable_placement_id": 0, 00:18:27.009 "enable_zerocopy_send_server": true, 00:18:27.009 "enable_zerocopy_send_client": false, 00:18:27.009 "zerocopy_threshold": 0, 00:18:27.009 "tls_version": 0, 00:18:27.009 "enable_ktls": false 00:18:27.009 } 00:18:27.009 }, 00:18:27.009 { 00:18:27.009 "method": "sock_impl_set_options", 00:18:27.009 "params": { 00:18:27.009 "impl_name": "posix", 00:18:27.009 "recv_buf_size": 2097152, 00:18:27.009 "send_buf_size": 2097152, 00:18:27.009 "enable_recv_pipe": true, 00:18:27.009 "enable_quickack": false, 00:18:27.009 "enable_placement_id": 0, 00:18:27.009 "enable_zerocopy_send_server": true, 00:18:27.009 "enable_zerocopy_send_client": false, 
00:18:27.009 "zerocopy_threshold": 0, 00:18:27.009 "tls_version": 0, 00:18:27.009 "enable_ktls": false 00:18:27.009 } 00:18:27.009 } 00:18:27.009 ] 00:18:27.009 }, 00:18:27.009 { 00:18:27.009 "subsystem": "vmd", 00:18:27.009 "config": [] 00:18:27.009 }, 00:18:27.009 { 00:18:27.009 "subsystem": "accel", 00:18:27.009 "config": [ 00:18:27.009 { 00:18:27.009 "method": "accel_set_options", 00:18:27.009 "params": { 00:18:27.009 "small_cache_size": 128, 00:18:27.009 "large_cache_size": 16, 00:18:27.009 "task_count": 2048, 00:18:27.009 "sequence_count": 2048, 00:18:27.009 "buf_count": 2048 00:18:27.009 } 00:18:27.009 } 00:18:27.009 ] 00:18:27.009 }, 00:18:27.009 { 00:18:27.009 "subsystem": "bdev", 00:18:27.009 "config": [ 00:18:27.009 { 00:18:27.009 "method": "bdev_set_options", 00:18:27.009 "params": { 00:18:27.009 "bdev_io_pool_size": 65535, 00:18:27.009 "bdev_io_cache_size": 256, 00:18:27.009 "bdev_auto_examine": true, 00:18:27.009 "iobuf_small_cache_size": 128, 00:18:27.009 "iobuf_large_cache_size": 16 00:18:27.009 } 00:18:27.009 }, 00:18:27.009 { 00:18:27.009 "method": "bdev_raid_set_options", 00:18:27.009 "params": { 00:18:27.009 "process_window_size_kb": 1024, 00:18:27.009 "process_max_bandwidth_mb_sec": 0 00:18:27.009 } 00:18:27.009 }, 00:18:27.009 { 00:18:27.009 "method": "bdev_iscsi_set_options", 00:18:27.009 "params": { 00:18:27.009 "timeout_sec": 30 00:18:27.009 } 00:18:27.009 }, 00:18:27.009 { 00:18:27.009 "method": "bdev_nvme_set_options", 00:18:27.009 "params": { 00:18:27.009 "action_on_timeout": "none", 00:18:27.009 "timeout_us": 0, 00:18:27.009 "timeout_admin_us": 0, 00:18:27.009 "keep_alive_timeout_ms": 10000, 00:18:27.009 "arbitration_burst": 0, 00:18:27.009 "low_priority_weight": 0, 00:18:27.009 "medium_priority_weight": 0, 00:18:27.009 "high_priority_weight": 0, 00:18:27.009 "nvme_adminq_poll_period_us": 10000, 00:18:27.009 "nvme_ioq_poll_period_us": 0, 00:18:27.009 "io_queue_requests": 512, 00:18:27.009 "delay_cmd_submit": true, 00:18:27.009 
"transport_retry_count": 4, 00:18:27.009 "bdev_retry_count": 3, 00:18:27.009 "transport_ack_timeout": 0, 00:18:27.009 "ctrlr_loss_timeout_sec": 0, 00:18:27.009 "reconnect_delay_sec": 0, 00:18:27.009 "fast_io_fail_timeout_sec": 0, 00:18:27.009 "disable_auto_failback": false, 00:18:27.009 "generate_uuids": false, 00:18:27.009 "transport_tos": 0, 00:18:27.009 "nvme_error_stat": false, 00:18:27.009 "rdma_srq_size": 0, 00:18:27.009 "io_path_stat": false, 00:18:27.009 "allow_accel_sequence": false, 00:18:27.009 "rdma_max_cq_size": 0, 00:18:27.009 "rdma_cm_event_timeout_ms": 0, 00:18:27.009 "dhchap_digests": [ 00:18:27.009 "sha256", 00:18:27.009 "sha384", 00:18:27.009 "sha512" 00:18:27.009 ], 00:18:27.009 "dhchap_dhgroups": [ 00:18:27.009 "null", 00:18:27.009 "ffdhe2048", 00:18:27.009 "ffdhe3072", 00:18:27.009 "ffdhe4096", 00:18:27.009 "ffdhe6144", 00:18:27.009 "ffdhe8192" 00:18:27.009 ] 00:18:27.009 } 00:18:27.009 }, 00:18:27.009 { 00:18:27.009 "method": "bdev_nvme_attach_controller", 00:18:27.009 "params": { 00:18:27.009 "name": "nvme0", 00:18:27.009 "trtype": "TCP", 00:18:27.009 "adrfam": "IPv4", 00:18:27.009 "traddr": "10.0.0.2", 00:18:27.009 "trsvcid": "4420", 00:18:27.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:27.009 "prchk_reftag": false, 00:18:27.009 "prchk_guard": false, 00:18:27.009 "ctrlr_loss_timeout_sec": 0, 00:18:27.009 "reconnect_delay_sec": 0, 00:18:27.009 "fast_io_fail_timeout_sec": 0, 00:18:27.009 "psk": "key0", 00:18:27.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:27.009 "hdgst": false, 00:18:27.009 "ddgst": false 00:18:27.009 } 00:18:27.009 }, 00:18:27.010 { 00:18:27.010 "method": "bdev_nvme_set_hotplug", 00:18:27.010 "params": { 00:18:27.010 "period_us": 100000, 00:18:27.010 "enable": false 00:18:27.010 } 00:18:27.010 }, 00:18:27.010 { 00:18:27.010 "method": "bdev_enable_histogram", 00:18:27.010 "params": { 00:18:27.010 "name": "nvme0n1", 00:18:27.010 "enable": true 00:18:27.010 } 00:18:27.010 }, 00:18:27.010 { 00:18:27.010 "method": 
"bdev_wait_for_examine" 00:18:27.010 } 00:18:27.010 ] 00:18:27.010 }, 00:18:27.010 { 00:18:27.010 "subsystem": "nbd", 00:18:27.010 "config": [] 00:18:27.010 } 00:18:27.010 ] 00:18:27.010 }' 00:18:27.010 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:27.010 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:27.010 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:27.010 12:19:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:27.010 [2024-07-26 12:19:20.155282] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:18:27.010 [2024-07-26 12:19:20.155373] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2900233 ] 00:18:27.010 EAL: No free 2048 kB hugepages reported on node 1 00:18:27.010 [2024-07-26 12:19:20.217989] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:27.269 [2024-07-26 12:19:20.335534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:27.269 [2024-07-26 12:19:20.520640] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:28.204 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:28.204 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # return 0 00:18:28.204 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:18:28.204 12:19:21 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:18:28.204 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:28.204 12:19:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:28.204 Running I/O for 1 seconds... 00:18:29.581 00:18:29.581 Latency(us) 00:18:29.581 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.581 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:18:29.581 Verification LBA range: start 0x0 length 0x2000 00:18:29.581 nvme0n1 : 1.04 2806.53 10.96 0.00 0.00 44769.41 9854.67 75730.49 00:18:29.581 =================================================================================================================== 00:18:29.581 Total : 2806.53 10.96 0.00 0.00 44769.41 9854.67 75730.49 00:18:29.581 0 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # type=--id 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@809 -- # id=0 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@816 -- # [[ -z 
nvmf_trace.0 ]] 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:29.582 nvmf_trace.0 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@823 -- # return 0 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 2900233 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2900233 ']' 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2900233 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2900233 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2900233' 00:18:29.582 killing process with pid 2900233 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2900233 00:18:29.582 Received shutdown signal, test time was about 1.000000 seconds 00:18:29.582 00:18:29.582 Latency(us) 00:18:29.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:29.582 
=================================================================================================================== 00:18:29.582 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:29.582 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2900233 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:29.841 rmmod nvme_tcp 00:18:29.841 rmmod nvme_fabrics 00:18:29.841 rmmod nvme_keyring 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 2900082 ']' 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 2900082 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@950 -- # '[' -z 2900082 ']' 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # kill -0 2900082 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # uname 00:18:29.841 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@955 -- # '[' 
Linux = Linux ']' 00:18:29.842 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2900082 00:18:29.842 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:18:29.842 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:18:29.842 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2900082' 00:18:29.842 killing process with pid 2900082 00:18:29.842 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@969 -- # kill 2900082 00:18:29.842 12:19:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@974 -- # wait 2900082 00:18:30.103 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:30.103 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:30.103 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:30.103 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:30.103 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:30.103 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:30.103 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:30.103 12:19:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.010 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:32.010 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.xag5lhR2SI /tmp/tmp.PyR7ruTrXk /tmp/tmp.gZbaxJR2Za 00:18:32.010 00:18:32.010 real 1m20.803s 
00:18:32.010 user 2m6.761s 00:18:32.010 sys 0m27.704s 00:18:32.010 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:32.010 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:32.010 ************************************ 00:18:32.010 END TEST nvmf_tls 00:18:32.010 ************************************ 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:32.271 ************************************ 00:18:32.271 START TEST nvmf_fips 00:18:32.271 ************************************ 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:18:32.271 * Looking for test storage... 
00:18:32.271 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 
00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # 
openssl version 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # awk '{print $2}' 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:32.271 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! 
-t 0 ]] 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # local es=0 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # valid_exec_arg openssl md5 /dev/fd/62 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@638 -- # local arg=openssl 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -t openssl 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@644 -- # type -P openssl 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # arg=/usr/bin/openssl 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # [[ -x /usr/bin/openssl ]] 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # openssl md5 /dev/fd/62 00:18:32.272 Error setting digest 00:18:32.272 00D2353C827F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:18:32.272 00D2353C827F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@653 -- # es=1 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:32.272 12:19:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:32.272 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:32.530 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:32.530 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:32.530 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:18:32.530 12:19:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # local -ga e810 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@297 -- # x722=() 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:34.435 12:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:34.435 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:34.435 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:34.435 12:19:27 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:34.435 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:34.435 
12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:34.435 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:34.435 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:34.436 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:34.436 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:18:34.436 00:18:34.436 --- 10.0.0.2 ping statistics --- 00:18:34.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.436 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:34.436 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:34.436 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.205 ms 00:18:34.436 00:18:34.436 --- 10.0.0.1 ping statistics --- 00:18:34.436 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:34.436 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@724 -- # xtrace_disable 00:18:34.436 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:34.694 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=2902591 00:18:34.694 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:34.694 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 2902591 00:18:34.694 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2902591 ']' 00:18:34.694 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:34.694 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:34.694 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:34.694 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:34.694 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:34.694 12:19:27 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:34.694 [2024-07-26 12:19:27.761024] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
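The long cmp_versions walk earlier in the trace compares the installed OpenSSL 3.0.9 against the 3.0.0 floor one numeric field at a time. A compact equivalent (an illustration only, not the helper scripts/common.sh actually uses) can lean on GNU sort's version ordering:

```shell
#!/usr/bin/env bash
# Sketch: "is version A >= version B?" via sort -V (GNU coreutils),
# mirroring what the ge/cmp_versions trace establishes for the
# OpenSSL 3.0.0 floor. Returns 0 (success) when A >= B.
ver_ge() {
    # Under version ordering, the smaller of the two sorts first;
    # A >= B exactly when that smaller one is B.
    [[ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" == "$2" ]]
}

ver_ge "$(openssl version | awk '{print $2}')" 3.0.0 \
    && echo "openssl is new enough for the fips test"
```

The field-by-field loop in scripts/common.sh trades this brevity for portability to systems whose sort lacks -V.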
00:18:34.694 [2024-07-26 12:19:27.761128] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:34.694 EAL: No free 2048 kB hugepages reported on node 1 00:18:34.694 [2024-07-26 12:19:27.824110] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:34.694 [2024-07-26 12:19:27.929018] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:34.694 [2024-07-26 12:19:27.929089] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:34.694 [2024-07-26 12:19:27.929130] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:34.694 [2024-07-26 12:19:27.929142] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:34.694 [2024-07-26 12:19:27.929153] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
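waitforlisten above blocks until the freshly started nvmf_tgt creates its RPC unix socket (/var/tmp/spdk.sock). A minimal, hypothetical version of that polling loop — the timeout and step here are made-up numbers, and the real helper also re-checks on every iteration that the pid is still alive:

```shell
#!/usr/bin/env bash
# Sketch: poll until a path appears (the autotest helper waits for the
# RPC unix socket, e.g. /var/tmp/spdk.sock). Returns 1 on timeout.
wait_for_path() {
    local path=$1 deadline=$((SECONDS + ${2:-10}))
    until [[ -e "$path" ]]; do
        (( SECONDS < deadline )) || return 1   # timed out
        sleep 0.1
    done
}
```

Checking the pid as well (kill -0 "$pid") lets a crashed target fail fast instead of burning the whole timeout, which is why the real waitforlisten takes the pid as an argument.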
00:18:34.694 [2024-07-26 12:19:27.929184] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:35.640 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:35.640 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:35.640 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:35.640 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@730 -- # xtrace_disable 00:18:35.640 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:35.640 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:35.640 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:18:35.640 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:35.641 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:35.641 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:18:35.641 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:35.641 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:35.641 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:35.641 12:19:28 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # 
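The key.txt handling just traced (echo -n the key, then chmod 0600) matters because the target rejects group- or world-readable PSK files. A small sketch of writing a TLS PSK interchange key with the same precaution — here via a restrictive umask so the file is never momentarily 0644, a variation on the trace's write-then-chmod; the key literal is the sample one from the trace, not a secret:

```shell
#!/usr/bin/env bash
# Sketch: persist a TLS PSK interchange key with owner-only permissions,
# as fips.sh does before pointing the target's subsystem at it.
write_psk() {
    local path=$1 key=$2
    # Subshell keeps the umask change from leaking to the caller.
    ( umask 077 && printf '%s' "$key" > "$path" )
}

write_psk /tmp/key.txt 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
```

Note the printf '%s' (no trailing newline), matching the echo -n in the trace: the PSK file must contain only the interchange-format key itself.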
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:35.899 [2024-07-26 12:19:29.015212] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:35.899 [2024-07-26 12:19:29.031196] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:35.899 [2024-07-26 12:19:29.031404] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:35.899 [2024-07-26 12:19:29.063045] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:35.899 malloc0 00:18:35.899 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:35.899 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=2902752 00:18:35.899 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:35.899 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 2902752 /var/tmp/bdevperf.sock 00:18:35.899 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@831 -- # '[' -z 2902752 ']' 00:18:35.899 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:35.899 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # local max_retries=100 00:18:35.899 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:35.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:18:35.899 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # xtrace_disable 00:18:35.899 12:19:29 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:36.157 [2024-07-26 12:19:29.155871] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:18:36.157 [2024-07-26 12:19:29.155959] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2902752 ] 00:18:36.157 EAL: No free 2048 kB hugepages reported on node 1 00:18:36.157 [2024-07-26 12:19:29.215324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:36.157 [2024-07-26 12:19:29.323727] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:37.093 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:18:37.093 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # return 0 00:18:37.093 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:37.352 [2024-07-26 12:19:30.390879] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:37.352 [2024-07-26 12:19:30.390998] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:37.352 TLSTESTn1 00:18:37.352 12:19:30 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:18:37.352 Running I/O for 10 seconds... 00:18:49.572 00:18:49.572 Latency(us) 00:18:49.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.572 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:49.572 Verification LBA range: start 0x0 length 0x2000 00:18:49.572 TLSTESTn1 : 10.04 3196.14 12.48 0.00 0.00 39952.02 6262.33 60972.75 00:18:49.572 =================================================================================================================== 00:18:49.572 Total : 3196.14 12.48 0.00 0.00 39952.02 6262.33 60972.75 00:18:49.572 0 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # type=--id 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@809 -- # id=0 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@810 -- # '[' --id = --pid ']' 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # shm_files=nvmf_trace.0 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@816 -- # [[ -z nvmf_trace.0 ]] 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # for n in $shm_files 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:18:49.572 nvmf_trace.0 
00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@823 -- # return 0 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 2902752 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2902752 ']' 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2902752 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2902752 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2902752' 00:18:49.572 killing process with pid 2902752 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2902752 00:18:49.572 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.572 00:18:49.572 Latency(us) 00:18:49.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.572 =================================================================================================================== 00:18:49.572 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:49.572 [2024-07-26 12:19:40.785594] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:18:49.572 12:19:40 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 
2902752 00:18:49.572 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:18:49.572 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:49.572 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:18:49.572 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:49.572 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:18:49.572 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:49.572 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:49.572 rmmod nvme_tcp 00:18:49.572 rmmod nvme_fabrics 00:18:49.572 rmmod nvme_keyring 00:18:49.572 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:49.572 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:18:49.572 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:18:49.572 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 2902591 ']' 00:18:49.572 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 2902591 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@950 -- # '[' -z 2902591 ']' 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # kill -0 2902591 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # uname 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2902591 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@956 -- # 
process_name=reactor_1 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2902591' 00:18:49.573 killing process with pid 2902591 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@969 -- # kill 2902591 00:18:49.573 [2024-07-26 12:19:41.131943] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@974 -- # wait 2902591 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:49.573 12:19:41 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:50.550 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:50.550 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:18:50.550 00:18:50.550 real 0m18.140s 00:18:50.550 user 0m20.610s 00:18:50.550 sys 
0m7.021s 00:18:50.550 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1126 -- # xtrace_disable 00:18:50.550 12:19:43 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:18:50.550 ************************************ 00:18:50.550 END TEST nvmf_fips 00:18:50.550 ************************************ 00:18:50.550 12:19:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:18:50.550 12:19:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:18:50.550 12:19:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:18:50.550 12:19:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:18:50.550 12:19:43 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:18:50.550 12:19:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local 
-ga e810 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:52.453 12:19:45 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:52.453 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:52.453 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 
00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:52.453 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:52.453 
Found net devices under 0000:0a:00.1: cvl_0_1 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:52.453 ************************************ 00:18:52.453 START TEST nvmf_perf_adq 00:18:52.453 ************************************ 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:18:52.453 * Looking for test storage... 
00:18:52.453 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:52.453 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:52.454 12:19:45 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:18:52.454 12:19:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:18:54.356 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:54.356 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:18:54.356 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:54.356 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:54.356 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:18:54.357 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:18:54.357 12:19:47 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:18:54.357 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:18:54.357 Found net devices under 0000:0a:00.0: cvl_0_0 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:18:54.357 Found net devices under 0000:0a:00.1: cvl_0_1 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:18:54.357 12:19:47 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:18:54.923 12:19:48 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:18:56.831 12:19:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:02.107 
12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 
00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:02.107 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:02.107 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:02.107 12:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:02.107 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:02.107 12:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:02.107 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:02.107 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:02.108 
12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:02.108 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:02.108 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:19:02.108 00:19:02.108 --- 10.0.0.2 ping statistics --- 00:19:02.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.108 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:02.108 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:02.108 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.146 ms 00:19:02.108 00:19:02.108 --- 10.0.0.1 ping statistics --- 00:19:02.108 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:02.108 rtt min/avg/max/mdev = 0.146/0.146/0.146/0.000 ms 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter 
start_nvmf_tgt 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2908626 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2908626 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2908626 ']' 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:02.108 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:02.108 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.108 [2024-07-26 12:19:55.250745] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:19:02.108 [2024-07-26 12:19:55.250832] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:02.108 EAL: No free 2048 kB hugepages reported on node 1 00:19:02.108 [2024-07-26 12:19:55.315600] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:02.368 [2024-07-26 12:19:55.424892] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:02.368 [2024-07-26 12:19:55.424959] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:02.368 [2024-07-26 12:19:55.424991] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:02.368 [2024-07-26 12:19:55.425003] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:02.368 [2024-07-26 12:19:55.425012] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:02.368 [2024-07-26 12:19:55.425098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:02.368 [2024-07-26 12:19:55.425166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:02.368 [2024-07-26 12:19:55.425232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:02.368 [2024-07-26 12:19:55.425235] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:02.368 12:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.368 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.627 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.627 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:19:02.627 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.627 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.627 [2024-07-26 12:19:55.644748] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:02.627 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.628 Malloc1 00:19:02.628 12:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:02.628 [2024-07-26 12:19:55.698134] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=2908660 00:19:02.628 12:19:55 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:19:02.628 12:19:55 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:02.628 EAL: No free 2048 kB hugepages reported on node 1 00:19:04.528 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:19:04.528 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:04.528 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:04.528 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:04.528 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:19:04.528 "tick_rate": 2700000000, 00:19:04.528 "poll_groups": [ 00:19:04.528 { 00:19:04.528 "name": "nvmf_tgt_poll_group_000", 00:19:04.528 "admin_qpairs": 1, 00:19:04.528 "io_qpairs": 1, 00:19:04.528 "current_admin_qpairs": 1, 00:19:04.528 "current_io_qpairs": 1, 00:19:04.528 "pending_bdev_io": 0, 00:19:04.528 "completed_nvme_io": 19858, 00:19:04.528 "transports": [ 00:19:04.528 { 00:19:04.528 "trtype": "TCP" 00:19:04.528 } 00:19:04.528 ] 00:19:04.528 }, 00:19:04.528 { 00:19:04.528 "name": "nvmf_tgt_poll_group_001", 00:19:04.528 "admin_qpairs": 0, 00:19:04.528 "io_qpairs": 1, 00:19:04.528 "current_admin_qpairs": 0, 00:19:04.528 "current_io_qpairs": 1, 00:19:04.528 "pending_bdev_io": 0, 00:19:04.528 "completed_nvme_io": 20417, 00:19:04.528 "transports": [ 00:19:04.528 { 00:19:04.528 "trtype": "TCP" 00:19:04.528 } 00:19:04.528 ] 00:19:04.528 }, 00:19:04.528 { 00:19:04.528 "name": "nvmf_tgt_poll_group_002", 00:19:04.528 "admin_qpairs": 0, 00:19:04.528 "io_qpairs": 1, 00:19:04.528 "current_admin_qpairs": 0, 00:19:04.528 "current_io_qpairs": 1, 00:19:04.528 "pending_bdev_io": 0, 
00:19:04.528 "completed_nvme_io": 19648, 00:19:04.528 "transports": [ 00:19:04.528 { 00:19:04.528 "trtype": "TCP" 00:19:04.528 } 00:19:04.528 ] 00:19:04.528 }, 00:19:04.528 { 00:19:04.528 "name": "nvmf_tgt_poll_group_003", 00:19:04.528 "admin_qpairs": 0, 00:19:04.528 "io_qpairs": 1, 00:19:04.528 "current_admin_qpairs": 0, 00:19:04.528 "current_io_qpairs": 1, 00:19:04.528 "pending_bdev_io": 0, 00:19:04.528 "completed_nvme_io": 19967, 00:19:04.528 "transports": [ 00:19:04.528 { 00:19:04.528 "trtype": "TCP" 00:19:04.528 } 00:19:04.528 ] 00:19:04.528 } 00:19:04.528 ] 00:19:04.528 }' 00:19:04.528 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:19:04.528 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:19:04.528 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:19:04.528 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:19:04.528 12:19:57 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # wait 2908660 00:19:12.651 Initializing NVMe Controllers 00:19:12.651 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:12.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:12.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:12.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:12.651 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:12.651 Initialization complete. Launching workers. 
00:19:12.651 ======================================================== 00:19:12.651 Latency(us) 00:19:12.651 Device Information : IOPS MiB/s Average min max 00:19:12.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10504.10 41.03 6093.98 2499.94 8551.65 00:19:12.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10781.00 42.11 5936.89 3946.61 7507.63 00:19:12.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10335.50 40.37 6193.25 2120.52 9504.09 00:19:12.651 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10487.70 40.97 6102.94 2043.98 9491.47 00:19:12.651 ======================================================== 00:19:12.651 Total : 42108.30 164.49 6080.36 2043.98 9504.09 00:19:12.651 00:19:12.651 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:19:12.651 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:12.651 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:12.651 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:12.651 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:12.651 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:12.651 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:12.651 rmmod nvme_tcp 00:19:12.651 rmmod nvme_fabrics 00:19:12.651 rmmod nvme_keyring 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:12.909 12:20:05 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2908626 ']' 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2908626 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2908626 ']' 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2908626 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2908626 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2908626' 00:19:12.909 killing process with pid 2908626 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2908626 00:19:12.909 12:20:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2908626 00:19:13.168 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:13.168 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:13.168 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:13.168 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:13.168 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:19:13.168 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:13.168 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:13.168 12:20:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:15.074 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:15.074 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:19:15.075 12:20:08 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:19:16.040 12:20:09 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:19:17.951 12:20:10 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 
00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@298 -- # local -ga mlx 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:23.229 12:20:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:23.229 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:23.229 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:23.229 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:23.230 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == 
up ]] 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:23.230 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # 
NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:23.230 12:20:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:23.230 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:23.230 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms
00:19:23.230
00:19:23.230 --- 10.0.0.2 ping statistics ---
00:19:23.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:23.230 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:23.230 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:23.230 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.132 ms
00:19:23.230
00:19:23.230 --- 10.0.0.1 ping statistics ---
00:19:23.230 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:23.230 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1
00:19:23.230 net.core.busy_poll = 1
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1
00:19:23.230 net.core.busy_read = 1
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x
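[Editor's note] The adq_configure_driver commands traced just above are easier to read pulled out of the xtrace output. The following is a minimal sketch of the same sequence, not the test script itself: the device name cvl_0_0, target IP 10.0.0.2, port 4420, and the 2+2 queue split are taken from this run, the commands require root and an ADQ-capable Intel E810 (ice) port, and in this run each command additionally runs inside the cvl_0_0_ns_spdk namespace via `ip netns exec`.

```shell
#!/bin/sh
# Sketch of the ADQ bring-up shown in the log (perf_adq.sh lines 22-38).
# Values below are assumptions copied from this particular run.
DEV=cvl_0_0          # E810 (ice) port under test
TADDR=10.0.0.2/32    # NVMe/TCP listener address
PORT=4420            # NVMe/TCP listener port

# 1. Enable hardware traffic-class offload on the NIC and turn off the
#    driver's packet-inspect optimization private flag.
ethtool --offload "$DEV" hw-tc-offload on
ethtool --set-priv-flags "$DEV" channel-pkt-inspect-optimize off

# 2. Enable socket busy polling so poll-group threads spin on their queues.
sysctl -w net.core.busy_poll=1
sysctl -w net.core.busy_read=1

# 3. Split the queues into two traffic classes in channel mode:
#    TC0 = 2 queues starting at offset 0, TC1 = 2 queues starting at offset 2.
tc qdisc add dev "$DEV" root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel

# 4. Steer inbound NVMe/TCP traffic into TC1 in hardware (skip_sw) with a
#    flower filter matching the listener address and port.
tc qdisc add dev "$DEV" ingress
tc filter add dev "$DEV" protocol ip parent ffff: prio 1 flower \
    dst_ip "$TADDR" ip_proto tcp dst_port "$PORT" skip_sw hw_tc 1
```

After this, the log runs SPDK's set_xps_rxqs helper on the same device to pin transmit queues to the cores servicing the matching receive queues, which is what lets the perf run above keep each connection on one poll group.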
00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=2911275 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 2911275 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@831 -- # '[' -z 2911275 ']' 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.230 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:23.230 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.230 [2024-07-26 12:20:16.337949] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:19:23.230 [2024-07-26 12:20:16.338036] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:23.230 EAL: No free 2048 kB hugepages reported on node 1 00:19:23.230 [2024-07-26 12:20:16.404214] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:23.491 [2024-07-26 12:20:16.515744] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:23.491 [2024-07-26 12:20:16.515798] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:23.491 [2024-07-26 12:20:16.515811] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:23.491 [2024-07-26 12:20:16.515838] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:23.491 [2024-07-26 12:20:16.515848] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:23.491 [2024-07-26 12:20:16.515930] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:23.491 [2024-07-26 12:20:16.515997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:23.491 [2024-07-26 12:20:16.516071] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:23.491 [2024-07-26 12:20:16.516077] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # return 0 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:19:23.491 12:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.491 [2024-07-26 12:20:16.721674] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.491 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.750 Malloc1 00:19:23.750 12:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:23.750 [2024-07-26 12:20:16.775568] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=2911426 00:19:23.750 12:20:16 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:19:23.750 12:20:16 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:19:23.750 EAL: No free 2048 kB hugepages reported on node 1 00:19:25.656 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:19:25.656 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:25.656 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:25.656 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:25.656 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:19:25.656 "tick_rate": 2700000000, 00:19:25.656 "poll_groups": [ 00:19:25.656 { 00:19:25.656 "name": "nvmf_tgt_poll_group_000", 00:19:25.656 "admin_qpairs": 1, 00:19:25.656 "io_qpairs": 2, 00:19:25.656 "current_admin_qpairs": 1, 00:19:25.656 "current_io_qpairs": 2, 00:19:25.656 "pending_bdev_io": 0, 00:19:25.656 "completed_nvme_io": 27334, 00:19:25.656 "transports": [ 00:19:25.656 { 00:19:25.656 "trtype": "TCP" 00:19:25.656 } 00:19:25.656 ] 00:19:25.656 }, 00:19:25.656 { 00:19:25.656 "name": "nvmf_tgt_poll_group_001", 00:19:25.656 "admin_qpairs": 0, 00:19:25.656 "io_qpairs": 2, 00:19:25.656 "current_admin_qpairs": 0, 00:19:25.656 "current_io_qpairs": 2, 00:19:25.656 "pending_bdev_io": 0, 00:19:25.656 "completed_nvme_io": 20730, 00:19:25.656 "transports": [ 00:19:25.656 { 00:19:25.656 "trtype": "TCP" 00:19:25.656 } 00:19:25.656 ] 00:19:25.656 }, 00:19:25.656 { 00:19:25.656 "name": "nvmf_tgt_poll_group_002", 00:19:25.656 "admin_qpairs": 0, 00:19:25.656 "io_qpairs": 0, 00:19:25.656 "current_admin_qpairs": 0, 00:19:25.656 "current_io_qpairs": 0, 00:19:25.656 "pending_bdev_io": 0, 
00:19:25.656 "completed_nvme_io": 0, 00:19:25.656 "transports": [ 00:19:25.656 { 00:19:25.656 "trtype": "TCP" 00:19:25.656 } 00:19:25.656 ] 00:19:25.656 }, 00:19:25.656 { 00:19:25.656 "name": "nvmf_tgt_poll_group_003", 00:19:25.656 "admin_qpairs": 0, 00:19:25.656 "io_qpairs": 0, 00:19:25.656 "current_admin_qpairs": 0, 00:19:25.656 "current_io_qpairs": 0, 00:19:25.656 "pending_bdev_io": 0, 00:19:25.656 "completed_nvme_io": 0, 00:19:25.656 "transports": [ 00:19:25.656 { 00:19:25.656 "trtype": "TCP" 00:19:25.656 } 00:19:25.656 ] 00:19:25.656 } 00:19:25.656 ] 00:19:25.656 }' 00:19:25.656 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:19:25.656 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:19:25.656 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:19:25.656 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:19:25.656 12:20:18 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 2911426 00:19:33.779 Initializing NVMe Controllers 00:19:33.779 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:19:33.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:19:33.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:19:33.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:19:33.779 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:19:33.779 Initialization complete. Launching workers. 
00:19:33.779 ======================================================== 00:19:33.779 Latency(us) 00:19:33.779 Device Information : IOPS MiB/s Average min max 00:19:33.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 4777.90 18.66 13399.23 1814.23 59135.71 00:19:33.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7883.20 30.79 8118.92 1584.99 52334.35 00:19:33.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 6543.80 25.56 9781.49 2014.00 54512.16 00:19:33.779 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 5907.50 23.08 10837.33 1984.15 56107.91 00:19:33.779 ======================================================== 00:19:33.779 Total : 25112.39 98.10 10196.27 1584.99 59135.71 00:19:33.779 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:33.779 rmmod nvme_tcp 00:19:33.779 rmmod nvme_fabrics 00:19:33.779 rmmod nvme_keyring 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:19:33.779 12:20:26 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 2911275 ']' 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 2911275 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@950 -- # '[' -z 2911275 ']' 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # kill -0 2911275 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # uname 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:33.779 12:20:26 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2911275 00:19:33.779 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:19:33.779 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:19:33.779 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2911275' 00:19:33.779 killing process with pid 2911275 00:19:33.779 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@969 -- # kill 2911275 00:19:33.779 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@974 -- # wait 2911275 00:19:34.349 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:34.349 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:34.349 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:34.349 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:34.349 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@278 -- # remove_spdk_ns 00:19:34.349 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:34.349 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:34.349 12:20:27 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:19:36.256 00:19:36.256 real 0m43.892s 00:19:36.256 user 2m29.699s 00:19:36.256 sys 0m13.371s 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:19:36.256 ************************************ 00:19:36.256 END TEST nvmf_perf_adq 00:19:36.256 ************************************ 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:36.256 ************************************ 00:19:36.256 START TEST nvmf_shutdown 00:19:36.256 ************************************ 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:19:36.256 * Looking for test storage... 
00:19:36.256 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:36.256 12:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:19:36.256 12:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:36.256 ************************************ 00:19:36.256 START TEST nvmf_shutdown_tc1 00:19:36.256 ************************************ 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc1 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:36.256 12:20:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:36.256 12:20:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 
00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:38.793 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:38.793 12:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:38.793 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 
-- # (( 1 == 0 )) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:38.793 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:38.793 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:38.793 12:20:31 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:38.793 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 
00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:38.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:38.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:19:38.794 00:19:38.794 --- 10.0.0.2 ping statistics --- 00:19:38.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.794 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:38.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:19:38.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.130 ms 00:19:38.794 00:19:38.794 --- 10.0.0.1 ping statistics --- 00:19:38.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:38.794 rtt min/avg/max/mdev = 0.130/0.130/0.130/0.000 ms 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:38.794 
12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=2914587 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 2914587 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2914587 ']' 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:38.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:38.794 12:20:31 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:38.794 [2024-07-26 12:20:31.815562] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:19:38.794 [2024-07-26 12:20:31.815642] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:38.794 EAL: No free 2048 kB hugepages reported on node 1 00:19:38.794 [2024-07-26 12:20:31.884538] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:38.794 [2024-07-26 12:20:32.000662] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:38.794 [2024-07-26 12:20:32.000718] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:38.794 [2024-07-26 12:20:32.000734] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:38.794 [2024-07-26 12:20:32.000748] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:38.794 [2024-07-26 12:20:32.000760] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:38.794 [2024-07-26 12:20:32.000855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:38.794 [2024-07-26 12:20:32.000948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:38.794 [2024-07-26 12:20:32.001015] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:38.794 [2024-07-26 12:20:32.001018] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:39.732 [2024-07-26 12:20:32.789692] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:39.732 12:20:32 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 
00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:39.732 12:20:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:39.732 Malloc1 00:19:39.732 [2024-07-26 12:20:32.864712] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:39.732 Malloc2 00:19:39.732 Malloc3 00:19:39.732 Malloc4 00:19:39.992 Malloc5 00:19:39.992 Malloc6 00:19:39.992 Malloc7 00:19:39.992 Malloc8 00:19:39.992 Malloc9 
00:19:40.250 Malloc10 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=2914772 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 2914772 /var/tmp/bdevperf.sock 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@831 -- # '[' -z 2914772 ']' 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:19:40.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.250 { 00:19:40.250 "params": { 00:19:40.250 "name": "Nvme$subsystem", 00:19:40.250 "trtype": "$TEST_TRANSPORT", 00:19:40.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.250 "adrfam": "ipv4", 00:19:40.250 "trsvcid": "$NVMF_PORT", 00:19:40.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.250 "hdgst": ${hdgst:-false}, 00:19:40.250 "ddgst": ${ddgst:-false} 00:19:40.250 }, 00:19:40.250 "method": "bdev_nvme_attach_controller" 00:19:40.250 } 00:19:40.250 EOF 00:19:40.250 )") 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.250 { 00:19:40.250 "params": { 00:19:40.250 "name": "Nvme$subsystem", 00:19:40.250 "trtype": "$TEST_TRANSPORT", 00:19:40.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.250 "adrfam": "ipv4", 00:19:40.250 "trsvcid": "$NVMF_PORT", 00:19:40.250 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.250 "hdgst": ${hdgst:-false}, 00:19:40.250 "ddgst": ${ddgst:-false} 00:19:40.250 }, 00:19:40.250 "method": "bdev_nvme_attach_controller" 00:19:40.250 } 00:19:40.250 EOF 00:19:40.250 )") 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.250 { 00:19:40.250 "params": { 00:19:40.250 "name": "Nvme$subsystem", 00:19:40.250 "trtype": "$TEST_TRANSPORT", 00:19:40.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.250 "adrfam": "ipv4", 00:19:40.250 "trsvcid": "$NVMF_PORT", 00:19:40.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.250 "hdgst": ${hdgst:-false}, 00:19:40.250 "ddgst": ${ddgst:-false} 00:19:40.250 }, 00:19:40.250 "method": "bdev_nvme_attach_controller" 00:19:40.250 } 00:19:40.250 EOF 00:19:40.250 )") 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.250 { 00:19:40.250 "params": { 00:19:40.250 "name": "Nvme$subsystem", 00:19:40.250 "trtype": "$TEST_TRANSPORT", 00:19:40.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.250 "adrfam": "ipv4", 00:19:40.250 "trsvcid": "$NVMF_PORT", 00:19:40.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.250 "hdgst": 
${hdgst:-false}, 00:19:40.250 "ddgst": ${ddgst:-false} 00:19:40.250 }, 00:19:40.250 "method": "bdev_nvme_attach_controller" 00:19:40.250 } 00:19:40.250 EOF 00:19:40.250 )") 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.250 { 00:19:40.250 "params": { 00:19:40.250 "name": "Nvme$subsystem", 00:19:40.250 "trtype": "$TEST_TRANSPORT", 00:19:40.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.250 "adrfam": "ipv4", 00:19:40.250 "trsvcid": "$NVMF_PORT", 00:19:40.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.250 "hdgst": ${hdgst:-false}, 00:19:40.250 "ddgst": ${ddgst:-false} 00:19:40.250 }, 00:19:40.250 "method": "bdev_nvme_attach_controller" 00:19:40.250 } 00:19:40.250 EOF 00:19:40.250 )") 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.250 { 00:19:40.250 "params": { 00:19:40.250 "name": "Nvme$subsystem", 00:19:40.250 "trtype": "$TEST_TRANSPORT", 00:19:40.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.250 "adrfam": "ipv4", 00:19:40.250 "trsvcid": "$NVMF_PORT", 00:19:40.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.250 "hdgst": ${hdgst:-false}, 00:19:40.250 "ddgst": ${ddgst:-false} 00:19:40.250 }, 00:19:40.250 "method": "bdev_nvme_attach_controller" 
00:19:40.250 } 00:19:40.250 EOF 00:19:40.250 )") 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.250 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.250 { 00:19:40.250 "params": { 00:19:40.250 "name": "Nvme$subsystem", 00:19:40.250 "trtype": "$TEST_TRANSPORT", 00:19:40.250 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.250 "adrfam": "ipv4", 00:19:40.250 "trsvcid": "$NVMF_PORT", 00:19:40.250 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.250 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.250 "hdgst": ${hdgst:-false}, 00:19:40.250 "ddgst": ${ddgst:-false} 00:19:40.250 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 } 00:19:40.251 EOF 00:19:40.251 )") 00:19:40.251 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.251 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.251 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.251 { 00:19:40.251 "params": { 00:19:40.251 "name": "Nvme$subsystem", 00:19:40.251 "trtype": "$TEST_TRANSPORT", 00:19:40.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.251 "adrfam": "ipv4", 00:19:40.251 "trsvcid": "$NVMF_PORT", 00:19:40.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.251 "hdgst": ${hdgst:-false}, 00:19:40.251 "ddgst": ${ddgst:-false} 00:19:40.251 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 } 00:19:40.251 EOF 00:19:40.251 )") 00:19:40.251 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 
-- nvmf/common.sh@554 -- # cat 00:19:40.251 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.251 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.251 { 00:19:40.251 "params": { 00:19:40.251 "name": "Nvme$subsystem", 00:19:40.251 "trtype": "$TEST_TRANSPORT", 00:19:40.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.251 "adrfam": "ipv4", 00:19:40.251 "trsvcid": "$NVMF_PORT", 00:19:40.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.251 "hdgst": ${hdgst:-false}, 00:19:40.251 "ddgst": ${ddgst:-false} 00:19:40.251 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 } 00:19:40.251 EOF 00:19:40.251 )") 00:19:40.251 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.251 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:40.251 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:40.251 { 00:19:40.251 "params": { 00:19:40.251 "name": "Nvme$subsystem", 00:19:40.251 "trtype": "$TEST_TRANSPORT", 00:19:40.251 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:40.251 "adrfam": "ipv4", 00:19:40.251 "trsvcid": "$NVMF_PORT", 00:19:40.251 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:40.251 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:40.251 "hdgst": ${hdgst:-false}, 00:19:40.251 "ddgst": ${ddgst:-false} 00:19:40.251 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 } 00:19:40.251 EOF 00:19:40.251 )") 00:19:40.251 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:40.251 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@556 -- # jq . 00:19:40.251 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:19:40.251 12:20:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:40.251 "params": { 00:19:40.251 "name": "Nvme1", 00:19:40.251 "trtype": "tcp", 00:19:40.251 "traddr": "10.0.0.2", 00:19:40.251 "adrfam": "ipv4", 00:19:40.251 "trsvcid": "4420", 00:19:40.251 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:40.251 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:40.251 "hdgst": false, 00:19:40.251 "ddgst": false 00:19:40.251 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 },{ 00:19:40.251 "params": { 00:19:40.251 "name": "Nvme2", 00:19:40.251 "trtype": "tcp", 00:19:40.251 "traddr": "10.0.0.2", 00:19:40.251 "adrfam": "ipv4", 00:19:40.251 "trsvcid": "4420", 00:19:40.251 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:40.251 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:40.251 "hdgst": false, 00:19:40.251 "ddgst": false 00:19:40.251 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 },{ 00:19:40.251 "params": { 00:19:40.251 "name": "Nvme3", 00:19:40.251 "trtype": "tcp", 00:19:40.251 "traddr": "10.0.0.2", 00:19:40.251 "adrfam": "ipv4", 00:19:40.251 "trsvcid": "4420", 00:19:40.251 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:40.251 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:40.251 "hdgst": false, 00:19:40.251 "ddgst": false 00:19:40.251 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 },{ 00:19:40.251 "params": { 00:19:40.251 "name": "Nvme4", 00:19:40.251 "trtype": "tcp", 00:19:40.251 "traddr": "10.0.0.2", 00:19:40.251 "adrfam": "ipv4", 00:19:40.251 "trsvcid": "4420", 00:19:40.251 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:40.251 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:40.251 "hdgst": false, 00:19:40.251 "ddgst": false 00:19:40.251 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 },{ 
00:19:40.251 "params": { 00:19:40.251 "name": "Nvme5", 00:19:40.251 "trtype": "tcp", 00:19:40.251 "traddr": "10.0.0.2", 00:19:40.251 "adrfam": "ipv4", 00:19:40.251 "trsvcid": "4420", 00:19:40.251 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:40.251 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:40.251 "hdgst": false, 00:19:40.251 "ddgst": false 00:19:40.251 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 },{ 00:19:40.251 "params": { 00:19:40.251 "name": "Nvme6", 00:19:40.251 "trtype": "tcp", 00:19:40.251 "traddr": "10.0.0.2", 00:19:40.251 "adrfam": "ipv4", 00:19:40.251 "trsvcid": "4420", 00:19:40.251 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:40.251 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:40.251 "hdgst": false, 00:19:40.251 "ddgst": false 00:19:40.251 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 },{ 00:19:40.251 "params": { 00:19:40.251 "name": "Nvme7", 00:19:40.251 "trtype": "tcp", 00:19:40.251 "traddr": "10.0.0.2", 00:19:40.251 "adrfam": "ipv4", 00:19:40.251 "trsvcid": "4420", 00:19:40.251 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:40.251 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:40.251 "hdgst": false, 00:19:40.251 "ddgst": false 00:19:40.251 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 },{ 00:19:40.251 "params": { 00:19:40.251 "name": "Nvme8", 00:19:40.251 "trtype": "tcp", 00:19:40.251 "traddr": "10.0.0.2", 00:19:40.251 "adrfam": "ipv4", 00:19:40.251 "trsvcid": "4420", 00:19:40.251 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:40.251 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:40.251 "hdgst": false, 00:19:40.251 "ddgst": false 00:19:40.251 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 },{ 00:19:40.251 "params": { 00:19:40.251 "name": "Nvme9", 00:19:40.251 "trtype": "tcp", 00:19:40.251 "traddr": "10.0.0.2", 00:19:40.251 "adrfam": "ipv4", 00:19:40.251 "trsvcid": "4420", 00:19:40.251 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:40.251 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:19:40.251 "hdgst": false, 00:19:40.251 "ddgst": false 00:19:40.251 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 },{ 00:19:40.251 "params": { 00:19:40.251 "name": "Nvme10", 00:19:40.251 "trtype": "tcp", 00:19:40.251 "traddr": "10.0.0.2", 00:19:40.251 "adrfam": "ipv4", 00:19:40.251 "trsvcid": "4420", 00:19:40.251 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:40.251 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:40.251 "hdgst": false, 00:19:40.251 "ddgst": false 00:19:40.251 }, 00:19:40.251 "method": "bdev_nvme_attach_controller" 00:19:40.251 }' 00:19:40.251 [2024-07-26 12:20:33.367569] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:19:40.251 [2024-07-26 12:20:33.367641] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:19:40.251 EAL: No free 2048 kB hugepages reported on node 1 00:19:40.251 [2024-07-26 12:20:33.430395] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:40.508 [2024-07-26 12:20:33.540534] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:42.439 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:42.439 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # return 0 00:19:42.439 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:42.439 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:42.439 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:19:42.439 12:20:35 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:42.439 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 2914772 00:19:42.439 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:19:42.439 12:20:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:19:43.377 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 2914772 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 2914587 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:43.377 { 00:19:43.377 "params": { 00:19:43.377 "name": "Nvme$subsystem", 00:19:43.377 "trtype": "$TEST_TRANSPORT", 00:19:43.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.377 "adrfam": "ipv4", 00:19:43.377 "trsvcid": 
"$NVMF_PORT", 00:19:43.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.377 "hdgst": ${hdgst:-false}, 00:19:43.377 "ddgst": ${ddgst:-false} 00:19:43.377 }, 00:19:43.377 "method": "bdev_nvme_attach_controller" 00:19:43.377 } 00:19:43.377 EOF 00:19:43.377 )") 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:43.377 { 00:19:43.377 "params": { 00:19:43.377 "name": "Nvme$subsystem", 00:19:43.377 "trtype": "$TEST_TRANSPORT", 00:19:43.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.377 "adrfam": "ipv4", 00:19:43.377 "trsvcid": "$NVMF_PORT", 00:19:43.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.377 "hdgst": ${hdgst:-false}, 00:19:43.377 "ddgst": ${ddgst:-false} 00:19:43.377 }, 00:19:43.377 "method": "bdev_nvme_attach_controller" 00:19:43.377 } 00:19:43.377 EOF 00:19:43.377 )") 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:43.377 { 00:19:43.377 "params": { 00:19:43.377 "name": "Nvme$subsystem", 00:19:43.377 "trtype": "$TEST_TRANSPORT", 00:19:43.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.377 "adrfam": "ipv4", 00:19:43.377 "trsvcid": "$NVMF_PORT", 00:19:43.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.377 "hostnqn": 
"nqn.2016-06.io.spdk:host$subsystem", 00:19:43.377 "hdgst": ${hdgst:-false}, 00:19:43.377 "ddgst": ${ddgst:-false} 00:19:43.377 }, 00:19:43.377 "method": "bdev_nvme_attach_controller" 00:19:43.377 } 00:19:43.377 EOF 00:19:43.377 )") 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:43.377 { 00:19:43.377 "params": { 00:19:43.377 "name": "Nvme$subsystem", 00:19:43.377 "trtype": "$TEST_TRANSPORT", 00:19:43.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.377 "adrfam": "ipv4", 00:19:43.377 "trsvcid": "$NVMF_PORT", 00:19:43.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.377 "hdgst": ${hdgst:-false}, 00:19:43.377 "ddgst": ${ddgst:-false} 00:19:43.377 }, 00:19:43.377 "method": "bdev_nvme_attach_controller" 00:19:43.377 } 00:19:43.377 EOF 00:19:43.377 )") 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:43.377 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:43.377 { 00:19:43.377 "params": { 00:19:43.377 "name": "Nvme$subsystem", 00:19:43.377 "trtype": "$TEST_TRANSPORT", 00:19:43.377 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.377 "adrfam": "ipv4", 00:19:43.377 "trsvcid": "$NVMF_PORT", 00:19:43.377 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.377 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.378 "hdgst": ${hdgst:-false}, 00:19:43.378 "ddgst": ${ddgst:-false} 00:19:43.378 
}, 00:19:43.378 "method": "bdev_nvme_attach_controller" 00:19:43.378 } 00:19:43.378 EOF 00:19:43.378 )") 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:43.378 { 00:19:43.378 "params": { 00:19:43.378 "name": "Nvme$subsystem", 00:19:43.378 "trtype": "$TEST_TRANSPORT", 00:19:43.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.378 "adrfam": "ipv4", 00:19:43.378 "trsvcid": "$NVMF_PORT", 00:19:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.378 "hdgst": ${hdgst:-false}, 00:19:43.378 "ddgst": ${ddgst:-false} 00:19:43.378 }, 00:19:43.378 "method": "bdev_nvme_attach_controller" 00:19:43.378 } 00:19:43.378 EOF 00:19:43.378 )") 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:43.378 { 00:19:43.378 "params": { 00:19:43.378 "name": "Nvme$subsystem", 00:19:43.378 "trtype": "$TEST_TRANSPORT", 00:19:43.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.378 "adrfam": "ipv4", 00:19:43.378 "trsvcid": "$NVMF_PORT", 00:19:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.378 "hdgst": ${hdgst:-false}, 00:19:43.378 "ddgst": ${ddgst:-false} 00:19:43.378 }, 00:19:43.378 "method": "bdev_nvme_attach_controller" 00:19:43.378 } 00:19:43.378 EOF 00:19:43.378 )") 00:19:43.378 12:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:43.378 { 00:19:43.378 "params": { 00:19:43.378 "name": "Nvme$subsystem", 00:19:43.378 "trtype": "$TEST_TRANSPORT", 00:19:43.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.378 "adrfam": "ipv4", 00:19:43.378 "trsvcid": "$NVMF_PORT", 00:19:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.378 "hdgst": ${hdgst:-false}, 00:19:43.378 "ddgst": ${ddgst:-false} 00:19:43.378 }, 00:19:43.378 "method": "bdev_nvme_attach_controller" 00:19:43.378 } 00:19:43.378 EOF 00:19:43.378 )") 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:43.378 { 00:19:43.378 "params": { 00:19:43.378 "name": "Nvme$subsystem", 00:19:43.378 "trtype": "$TEST_TRANSPORT", 00:19:43.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.378 "adrfam": "ipv4", 00:19:43.378 "trsvcid": "$NVMF_PORT", 00:19:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.378 "hdgst": ${hdgst:-false}, 00:19:43.378 "ddgst": ${ddgst:-false} 00:19:43.378 }, 00:19:43.378 "method": "bdev_nvme_attach_controller" 00:19:43.378 } 00:19:43.378 EOF 00:19:43.378 )") 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:43.378 12:20:36 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:43.378 { 00:19:43.378 "params": { 00:19:43.378 "name": "Nvme$subsystem", 00:19:43.378 "trtype": "$TEST_TRANSPORT", 00:19:43.378 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:43.378 "adrfam": "ipv4", 00:19:43.378 "trsvcid": "$NVMF_PORT", 00:19:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:43.378 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:43.378 "hdgst": ${hdgst:-false}, 00:19:43.378 "ddgst": ${ddgst:-false} 00:19:43.378 }, 00:19:43.378 "method": "bdev_nvme_attach_controller" 00:19:43.378 } 00:19:43.378 EOF 00:19:43.378 )") 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=,
00:19:43.378 12:20:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:19:43.378 "params": {
00:19:43.378 "name": "Nvme1",
00:19:43.378 "trtype": "tcp",
00:19:43.378 "traddr": "10.0.0.2",
00:19:43.378 "adrfam": "ipv4",
00:19:43.378 "trsvcid": "4420",
00:19:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:19:43.378 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:19:43.378 "hdgst": false,
00:19:43.378 "ddgst": false
00:19:43.378 },
00:19:43.378 "method": "bdev_nvme_attach_controller"
00:19:43.378 },{
00:19:43.378 "params": {
00:19:43.378 "name": "Nvme2",
00:19:43.378 "trtype": "tcp",
00:19:43.378 "traddr": "10.0.0.2",
00:19:43.378 "adrfam": "ipv4",
00:19:43.378 "trsvcid": "4420",
00:19:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:19:43.378 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:19:43.378 "hdgst": false,
00:19:43.378 "ddgst": false
00:19:43.378 },
00:19:43.378 "method": "bdev_nvme_attach_controller"
00:19:43.378 },{
00:19:43.378 "params": {
00:19:43.378 "name": "Nvme3",
00:19:43.378 "trtype": "tcp",
00:19:43.378 "traddr": "10.0.0.2",
00:19:43.378 "adrfam": "ipv4",
00:19:43.378 "trsvcid": "4420",
00:19:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode3",
00:19:43.378 "hostnqn": "nqn.2016-06.io.spdk:host3",
00:19:43.378 "hdgst": false,
00:19:43.378 "ddgst": false
00:19:43.378 },
00:19:43.378 "method": "bdev_nvme_attach_controller"
00:19:43.378 },{
00:19:43.378 "params": {
00:19:43.378 "name": "Nvme4",
00:19:43.378 "trtype": "tcp",
00:19:43.378 "traddr": "10.0.0.2",
00:19:43.378 "adrfam": "ipv4",
00:19:43.378 "trsvcid": "4420",
00:19:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode4",
00:19:43.378 "hostnqn": "nqn.2016-06.io.spdk:host4",
00:19:43.378 "hdgst": false,
00:19:43.378 "ddgst": false
00:19:43.378 },
00:19:43.378 "method": "bdev_nvme_attach_controller"
00:19:43.378 },{
00:19:43.378 "params": {
00:19:43.378 "name": "Nvme5",
00:19:43.378 "trtype": "tcp",
00:19:43.378 "traddr": "10.0.0.2",
00:19:43.378 "adrfam": "ipv4",
00:19:43.378 "trsvcid": "4420",
00:19:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode5",
00:19:43.378 "hostnqn": "nqn.2016-06.io.spdk:host5",
00:19:43.378 "hdgst": false,
00:19:43.378 "ddgst": false
00:19:43.378 },
00:19:43.378 "method": "bdev_nvme_attach_controller"
00:19:43.378 },{
00:19:43.378 "params": {
00:19:43.378 "name": "Nvme6",
00:19:43.378 "trtype": "tcp",
00:19:43.378 "traddr": "10.0.0.2",
00:19:43.378 "adrfam": "ipv4",
00:19:43.378 "trsvcid": "4420",
00:19:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode6",
00:19:43.378 "hostnqn": "nqn.2016-06.io.spdk:host6",
00:19:43.378 "hdgst": false,
00:19:43.378 "ddgst": false
00:19:43.378 },
00:19:43.378 "method": "bdev_nvme_attach_controller"
00:19:43.378 },{
00:19:43.378 "params": {
00:19:43.378 "name": "Nvme7",
00:19:43.378 "trtype": "tcp",
00:19:43.378 "traddr": "10.0.0.2",
00:19:43.378 "adrfam": "ipv4",
00:19:43.378 "trsvcid": "4420",
00:19:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode7",
00:19:43.378 "hostnqn": "nqn.2016-06.io.spdk:host7",
00:19:43.378 "hdgst": false,
00:19:43.378 "ddgst": false
00:19:43.378 },
00:19:43.378 "method": "bdev_nvme_attach_controller"
00:19:43.378 },{
00:19:43.378 "params": {
00:19:43.378 "name": "Nvme8",
00:19:43.378 "trtype": "tcp",
00:19:43.378 "traddr": "10.0.0.2",
00:19:43.378 "adrfam": "ipv4",
00:19:43.378 "trsvcid": "4420",
00:19:43.378 "subnqn": "nqn.2016-06.io.spdk:cnode8",
00:19:43.378 "hostnqn": "nqn.2016-06.io.spdk:host8",
00:19:43.378 "hdgst": false,
00:19:43.378 "ddgst": false
00:19:43.378 },
00:19:43.378 "method": "bdev_nvme_attach_controller"
00:19:43.378 },{
00:19:43.378 "params": {
00:19:43.378 "name": "Nvme9",
00:19:43.378 "trtype": "tcp",
00:19:43.378 "traddr": "10.0.0.2",
00:19:43.378 "adrfam": "ipv4",
00:19:43.379 "trsvcid": "4420",
00:19:43.379 "subnqn": "nqn.2016-06.io.spdk:cnode9",
00:19:43.379 "hostnqn": "nqn.2016-06.io.spdk:host9",
00:19:43.379 "hdgst": false,
00:19:43.379 "ddgst": false
00:19:43.379 },
00:19:43.379 "method": "bdev_nvme_attach_controller"
00:19:43.379 },{
00:19:43.379 "params": {
00:19:43.379 "name": "Nvme10",
00:19:43.379 "trtype": "tcp",
00:19:43.379 "traddr": "10.0.0.2",
00:19:43.379 "adrfam": "ipv4",
00:19:43.379 "trsvcid": "4420",
00:19:43.379 "subnqn": "nqn.2016-06.io.spdk:cnode10",
00:19:43.379 "hostnqn": "nqn.2016-06.io.spdk:host10",
00:19:43.379 "hdgst": false,
00:19:43.379 "ddgst": false
00:19:43.379 },
00:19:43.379 "method": "bdev_nvme_attach_controller"
00:19:43.379 }'
00:19:43.379 [2024-07-26 12:20:36.432102] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:19:43.379 [2024-07-26 12:20:36.432192] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2915195 ]
00:19:43.379 EAL: No free 2048 kB hugepages reported on node 1
00:19:43.379 [2024-07-26 12:20:36.496290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:19:43.379 [2024-07-26 12:20:36.605857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:19:44.757 Running I/O for 1 seconds...
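The trace above shows the config-generation pattern from nvmf/common.sh: a loop appends one heredoc JSON fragment per subsystem to a bash array, and the fragments are then joined with `IFS=,` before being handed to bdevperf. Below is a minimal, self-contained sketch of that pattern. The variable values are the ones visible in the expanded config above (in the real harness they are set elsewhere in nvmf/common.sh), only two subsystems are generated, and the `jq .` normalization step is omitted to keep the sketch dependency-free.

```shell
#!/usr/bin/env bash
# Sketch of the per-subsystem config loop traced above.
# Assumed values (taken from the expanded output in the log):
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

config=()
for subsystem in 1 2; do
	# Unquoted EOF: variables inside the heredoc are expanded, and
	# ${hdgst:-false}/${ddgst:-false} default to false when unset.
	config+=("$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": ${hdgst:-false},
    "ddgst": ${ddgst:-false}
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
	)")
done

# Join the fragments with commas, as common.sh does via IFS=,
joined=$(IFS=,; printf '%s\n' "${config[*]}")
printf '%s\n' "$joined"
```

Joining with `IFS=,` is what produces the `},{` boundaries between controller entries that are visible in the expanded `printf` output above.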
00:19:46.135
00:19:46.135 Latency(us)
00:19:46.135 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:46.135 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:46.135 Verification LBA range: start 0x0 length 0x400
00:19:46.135 Nvme1n1 : 1.02 195.16 12.20 0.00 0.00 320690.95 8009.96 276513.37
00:19:46.135 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:46.135 Verification LBA range: start 0x0 length 0x400
00:19:46.135 Nvme2n1 : 1.16 285.00 17.81 0.00 0.00 218137.97 4878.79 246997.90
00:19:46.135 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:46.135 Verification LBA range: start 0x0 length 0x400
00:19:46.135 Nvme3n1 : 1.11 231.13 14.45 0.00 0.00 264506.03 20194.80 256318.58
00:19:46.135 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:46.135 Verification LBA range: start 0x0 length 0x400
00:19:46.135 Nvme4n1 : 1.17 274.63 17.16 0.00 0.00 219790.75 19029.71 253211.69
00:19:46.135 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:46.135 Verification LBA range: start 0x0 length 0x400
00:19:46.135 Nvme5n1 : 1.17 217.94 13.62 0.00 0.00 272682.86 22233.69 271853.04
00:19:46.135 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:46.135 Verification LBA range: start 0x0 length 0x400
00:19:46.135 Nvme6n1 : 1.12 229.34 14.33 0.00 0.00 253354.48 21262.79 253211.69
00:19:46.135 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:46.135 Verification LBA range: start 0x0 length 0x400
00:19:46.135 Nvme7n1 : 1.14 224.38 14.02 0.00 0.00 250420.34 23981.32 250104.79
00:19:46.135 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:46.135 Verification LBA range: start 0x0 length 0x400
00:19:46.135 Nvme8n1 : 1.15 222.43 13.90 0.00 0.00 253070.03 21554.06 251658.24
00:19:46.135 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:46.135 Verification LBA range: start 0x0 length 0x400
00:19:46.136 Nvme9n1 : 1.18 217.32 13.58 0.00 0.00 255473.78 22330.79 287387.50
00:19:46.136 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:46.136 Verification LBA range: start 0x0 length 0x400
00:19:46.136 Nvme10n1 : 1.18 270.38 16.90 0.00 0.00 202083.29 18350.08 251658.24
00:19:46.136 ===================================================================================================================
00:19:46.136 Total : 2367.73 147.98 0.00 0.00 246783.94 4878.79 287387.50
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20}
00:19:46.394
12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:19:46.394 rmmod nvme_tcp
00:19:46.394 rmmod nvme_fabrics
00:19:46.394 rmmod nvme_keyring
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 2914587 ']'
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 2914587
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@950 -- # '[' -z 2914587 ']'
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # kill -0 2914587
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # uname
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2914587
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@956 -- # process_name=reactor_1
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']'
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2914587'
00:19:46.394 killing process with pid 2914587
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@969 -- # kill 2914587
00:19:46.394 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@974 -- # wait 2914587
00:19:46.961 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:19:46.961 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:19:46.961 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:19:46.961 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:19:46.961 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:19:46.961 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:19:46.961 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:19:46.961 12:20:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:19:48.865
00:19:48.865 real 0m12.520s
00:19:48.865 user 0m36.242s
00:19:48.865 sys 0m3.424s
00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x
00:19:48.865 ************************************
00:19:48.865 END TEST nvmf_shutdown_tc1
00:19:48.865 ************************************
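The shutdown path traced above relies on the killprocess helper from autotest_common.sh, which guards its kill: the PID must name a live process (`kill -0`), and the process's command name is checked so a bare `sudo` wrapper is never targeted. A hedged sketch of that guard logic follows; `killprocess_sketch` is a hypothetical stand-in for the real helper, which additionally sends the signal and waits for the process to exit.

```shell
#!/usr/bin/env bash
# Sketch of the killprocess guard pattern visible in the trace:
#   1) the PID must refer to a live process (kill -0),
#   2) its command name is inspected so "sudo" itself is never killed.
# killprocess_sketch is a hypothetical name; the real helper in
# autotest_common.sh also performs the actual kill and wait.
killprocess_sketch() {
	local pid=$1
	[ -n "$pid" ] || return 1
	kill -0 "$pid" 2>/dev/null || return 1   # process must still exist
	local name
	name=$(ps --no-headers -o comm= "$pid")  # same probe as in the log
	[ "$name" != "sudo" ] || return 1        # never target the sudo wrapper
	echo "killing process with pid $pid ($name)"
}
```

Called with a live PID (for example the current shell's `$$`) it reports the target; called with a stale PID it fails quietly, mirroring the `kill -0 2914587` probe seen in the log before the actual `kill`.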
00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:48.865 ************************************ 00:19:48.865 START TEST nvmf_shutdown_tc2 00:19:48.865 ************************************ 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc2 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:48.865 12:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 
-- # local -ga e810 00:19:48.865 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:48.866 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:48.866 12:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:48.866 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@390 -- # [[ up == up ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:48.866 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:48.866 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:48.866 
12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 
00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:19:48.866 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:19:49.127 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:19:49.127 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:19:49.127 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:19:49.127 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:19:49.127 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:19:49.127 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:19:49.127 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:19:49.127 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:19:49.127 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.237 ms 00:19:49.127 00:19:49.127 --- 10.0.0.2 ping statistics --- 00:19:49.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.127 rtt min/avg/max/mdev = 0.237/0.237/0.237/0.000 ms 00:19:49.127 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:19:49.127 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:19:49.127 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms 00:19:49.127 00:19:49.127 --- 10.0.0.1 ping statistics --- 00:19:49.127 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:49.127 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:19:49.127 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:49.127 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:19:49.127 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:19:49.127 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:19:49.128 12:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2915957 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2915957 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2915957 ']' 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:49.128 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:49.128 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.128 [2024-07-26 12:20:42.314367] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:19:49.128 [2024-07-26 12:20:42.314470] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:49.128 EAL: No free 2048 kB hugepages reported on node 1 00:19:49.387 [2024-07-26 12:20:42.387595] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:49.387 [2024-07-26 12:20:42.499300] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:49.387 [2024-07-26 12:20:42.499373] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:49.387 [2024-07-26 12:20:42.499387] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:49.387 [2024-07-26 12:20:42.499398] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:49.387 [2024-07-26 12:20:42.499408] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:49.387 [2024-07-26 12:20:42.499493] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:49.387 [2024-07-26 12:20:42.499557] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:49.387 [2024-07-26 12:20:42.499622] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:49.387 [2024-07-26 12:20:42.499626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:49.387 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:49.387 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:49.387 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.387 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:49.387 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.645 [2024-07-26 12:20:42.652594] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:49.645 12:20:42 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 
00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:49.645 12:20:42 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:49.645 Malloc1 00:19:49.645 [2024-07-26 12:20:42.736932] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.645 Malloc2 00:19:49.645 Malloc3 00:19:49.645 Malloc4 00:19:49.903 Malloc5 00:19:49.903 Malloc6 00:19:49.903 Malloc7 00:19:49.903 Malloc8 00:19:49.903 Malloc9 
00:19:49.903 Malloc10 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=2916140 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 2916140 /var/tmp/bdevperf.sock 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2916140 ']' 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/bdevperf.sock...' 00:19:50.160 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.160 { 00:19:50.160 "params": { 00:19:50.160 "name": "Nvme$subsystem", 00:19:50.160 "trtype": "$TEST_TRANSPORT", 00:19:50.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.160 "adrfam": "ipv4", 00:19:50.160 "trsvcid": "$NVMF_PORT", 00:19:50.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.160 "hdgst": ${hdgst:-false}, 00:19:50.160 "ddgst": ${ddgst:-false} 00:19:50.160 }, 00:19:50.160 "method": "bdev_nvme_attach_controller" 00:19:50.160 } 00:19:50.160 EOF 00:19:50.160 )") 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.160 { 00:19:50.160 "params": { 00:19:50.160 "name": "Nvme$subsystem", 00:19:50.160 "trtype": "$TEST_TRANSPORT", 00:19:50.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.160 "adrfam": "ipv4", 00:19:50.160 "trsvcid": "$NVMF_PORT", 00:19:50.160 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.160 "hdgst": ${hdgst:-false}, 00:19:50.160 "ddgst": ${ddgst:-false} 00:19:50.160 }, 00:19:50.160 "method": "bdev_nvme_attach_controller" 00:19:50.160 } 00:19:50.160 EOF 00:19:50.160 )") 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.160 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.160 { 00:19:50.160 "params": { 00:19:50.160 "name": "Nvme$subsystem", 00:19:50.160 "trtype": "$TEST_TRANSPORT", 00:19:50.160 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.160 "adrfam": "ipv4", 00:19:50.160 "trsvcid": "$NVMF_PORT", 00:19:50.160 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.160 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.160 "hdgst": ${hdgst:-false}, 00:19:50.160 "ddgst": ${ddgst:-false} 00:19:50.160 }, 00:19:50.160 "method": "bdev_nvme_attach_controller" 00:19:50.161 } 00:19:50.161 EOF 00:19:50.161 )") 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.161 { 00:19:50.161 "params": { 00:19:50.161 "name": "Nvme$subsystem", 00:19:50.161 "trtype": "$TEST_TRANSPORT", 00:19:50.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.161 "adrfam": "ipv4", 00:19:50.161 "trsvcid": "$NVMF_PORT", 00:19:50.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.161 "hdgst": 
${hdgst:-false}, 00:19:50.161 "ddgst": ${ddgst:-false} 00:19:50.161 }, 00:19:50.161 "method": "bdev_nvme_attach_controller" 00:19:50.161 } 00:19:50.161 EOF 00:19:50.161 )") 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.161 { 00:19:50.161 "params": { 00:19:50.161 "name": "Nvme$subsystem", 00:19:50.161 "trtype": "$TEST_TRANSPORT", 00:19:50.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.161 "adrfam": "ipv4", 00:19:50.161 "trsvcid": "$NVMF_PORT", 00:19:50.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.161 "hdgst": ${hdgst:-false}, 00:19:50.161 "ddgst": ${ddgst:-false} 00:19:50.161 }, 00:19:50.161 "method": "bdev_nvme_attach_controller" 00:19:50.161 } 00:19:50.161 EOF 00:19:50.161 )") 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.161 { 00:19:50.161 "params": { 00:19:50.161 "name": "Nvme$subsystem", 00:19:50.161 "trtype": "$TEST_TRANSPORT", 00:19:50.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.161 "adrfam": "ipv4", 00:19:50.161 "trsvcid": "$NVMF_PORT", 00:19:50.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.161 "hdgst": ${hdgst:-false}, 00:19:50.161 "ddgst": ${ddgst:-false} 00:19:50.161 }, 00:19:50.161 "method": "bdev_nvme_attach_controller" 
00:19:50.161 } 00:19:50.161 EOF 00:19:50.161 )") 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.161 { 00:19:50.161 "params": { 00:19:50.161 "name": "Nvme$subsystem", 00:19:50.161 "trtype": "$TEST_TRANSPORT", 00:19:50.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.161 "adrfam": "ipv4", 00:19:50.161 "trsvcid": "$NVMF_PORT", 00:19:50.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.161 "hdgst": ${hdgst:-false}, 00:19:50.161 "ddgst": ${ddgst:-false} 00:19:50.161 }, 00:19:50.161 "method": "bdev_nvme_attach_controller" 00:19:50.161 } 00:19:50.161 EOF 00:19:50.161 )") 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.161 { 00:19:50.161 "params": { 00:19:50.161 "name": "Nvme$subsystem", 00:19:50.161 "trtype": "$TEST_TRANSPORT", 00:19:50.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.161 "adrfam": "ipv4", 00:19:50.161 "trsvcid": "$NVMF_PORT", 00:19:50.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.161 "hdgst": ${hdgst:-false}, 00:19:50.161 "ddgst": ${ddgst:-false} 00:19:50.161 }, 00:19:50.161 "method": "bdev_nvme_attach_controller" 00:19:50.161 } 00:19:50.161 EOF 00:19:50.161 )") 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 
-- nvmf/common.sh@554 -- # cat 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.161 { 00:19:50.161 "params": { 00:19:50.161 "name": "Nvme$subsystem", 00:19:50.161 "trtype": "$TEST_TRANSPORT", 00:19:50.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.161 "adrfam": "ipv4", 00:19:50.161 "trsvcid": "$NVMF_PORT", 00:19:50.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.161 "hdgst": ${hdgst:-false}, 00:19:50.161 "ddgst": ${ddgst:-false} 00:19:50.161 }, 00:19:50.161 "method": "bdev_nvme_attach_controller" 00:19:50.161 } 00:19:50.161 EOF 00:19:50.161 )") 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:50.161 { 00:19:50.161 "params": { 00:19:50.161 "name": "Nvme$subsystem", 00:19:50.161 "trtype": "$TEST_TRANSPORT", 00:19:50.161 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:50.161 "adrfam": "ipv4", 00:19:50.161 "trsvcid": "$NVMF_PORT", 00:19:50.161 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:50.161 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:50.161 "hdgst": ${hdgst:-false}, 00:19:50.161 "ddgst": ${ddgst:-false} 00:19:50.161 }, 00:19:50.161 "method": "bdev_nvme_attach_controller" 00:19:50.161 } 00:19:50.161 EOF 00:19:50.161 )") 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@556 -- # jq . 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:19:50.161 12:20:43 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:50.161 "params": { 00:19:50.161 "name": "Nvme1", 00:19:50.161 "trtype": "tcp", 00:19:50.161 "traddr": "10.0.0.2", 00:19:50.161 "adrfam": "ipv4", 00:19:50.161 "trsvcid": "4420", 00:19:50.161 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:50.161 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:50.161 "hdgst": false, 00:19:50.161 "ddgst": false 00:19:50.161 }, 00:19:50.161 "method": "bdev_nvme_attach_controller" 00:19:50.161 },{ 00:19:50.161 "params": { 00:19:50.161 "name": "Nvme2", 00:19:50.161 "trtype": "tcp", 00:19:50.161 "traddr": "10.0.0.2", 00:19:50.161 "adrfam": "ipv4", 00:19:50.161 "trsvcid": "4420", 00:19:50.161 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:50.161 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:50.161 "hdgst": false, 00:19:50.161 "ddgst": false 00:19:50.161 }, 00:19:50.161 "method": "bdev_nvme_attach_controller" 00:19:50.161 },{ 00:19:50.161 "params": { 00:19:50.161 "name": "Nvme3", 00:19:50.161 "trtype": "tcp", 00:19:50.161 "traddr": "10.0.0.2", 00:19:50.161 "adrfam": "ipv4", 00:19:50.161 "trsvcid": "4420", 00:19:50.161 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:50.161 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:50.161 "hdgst": false, 00:19:50.161 "ddgst": false 00:19:50.161 }, 00:19:50.161 "method": "bdev_nvme_attach_controller" 00:19:50.161 },{ 00:19:50.161 "params": { 00:19:50.161 "name": "Nvme4", 00:19:50.161 "trtype": "tcp", 00:19:50.161 "traddr": "10.0.0.2", 00:19:50.161 "adrfam": "ipv4", 00:19:50.161 "trsvcid": "4420", 00:19:50.161 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:50.161 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:50.161 "hdgst": false, 00:19:50.161 "ddgst": false 00:19:50.161 }, 00:19:50.161 "method": "bdev_nvme_attach_controller" 00:19:50.161 },{ 
00:19:50.161 "params": { 00:19:50.161 "name": "Nvme5", 00:19:50.161 "trtype": "tcp", 00:19:50.161 "traddr": "10.0.0.2", 00:19:50.161 "adrfam": "ipv4", 00:19:50.161 "trsvcid": "4420", 00:19:50.161 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:50.161 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:50.161 "hdgst": false, 00:19:50.161 "ddgst": false 00:19:50.161 }, 00:19:50.161 "method": "bdev_nvme_attach_controller" 00:19:50.162 },{ 00:19:50.162 "params": { 00:19:50.162 "name": "Nvme6", 00:19:50.162 "trtype": "tcp", 00:19:50.162 "traddr": "10.0.0.2", 00:19:50.162 "adrfam": "ipv4", 00:19:50.162 "trsvcid": "4420", 00:19:50.162 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:50.162 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:50.162 "hdgst": false, 00:19:50.162 "ddgst": false 00:19:50.162 }, 00:19:50.162 "method": "bdev_nvme_attach_controller" 00:19:50.162 },{ 00:19:50.162 "params": { 00:19:50.162 "name": "Nvme7", 00:19:50.162 "trtype": "tcp", 00:19:50.162 "traddr": "10.0.0.2", 00:19:50.162 "adrfam": "ipv4", 00:19:50.162 "trsvcid": "4420", 00:19:50.162 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:50.162 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:50.162 "hdgst": false, 00:19:50.162 "ddgst": false 00:19:50.162 }, 00:19:50.162 "method": "bdev_nvme_attach_controller" 00:19:50.162 },{ 00:19:50.162 "params": { 00:19:50.162 "name": "Nvme8", 00:19:50.162 "trtype": "tcp", 00:19:50.162 "traddr": "10.0.0.2", 00:19:50.162 "adrfam": "ipv4", 00:19:50.162 "trsvcid": "4420", 00:19:50.162 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:50.162 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:50.162 "hdgst": false, 00:19:50.162 "ddgst": false 00:19:50.162 }, 00:19:50.162 "method": "bdev_nvme_attach_controller" 00:19:50.162 },{ 00:19:50.162 "params": { 00:19:50.162 "name": "Nvme9", 00:19:50.162 "trtype": "tcp", 00:19:50.162 "traddr": "10.0.0.2", 00:19:50.162 "adrfam": "ipv4", 00:19:50.162 "trsvcid": "4420", 00:19:50.162 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:50.162 "hostnqn": 
"nqn.2016-06.io.spdk:host9", 00:19:50.162 "hdgst": false, 00:19:50.162 "ddgst": false 00:19:50.162 }, 00:19:50.162 "method": "bdev_nvme_attach_controller" 00:19:50.162 },{ 00:19:50.162 "params": { 00:19:50.162 "name": "Nvme10", 00:19:50.162 "trtype": "tcp", 00:19:50.162 "traddr": "10.0.0.2", 00:19:50.162 "adrfam": "ipv4", 00:19:50.162 "trsvcid": "4420", 00:19:50.162 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:50.162 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:50.162 "hdgst": false, 00:19:50.162 "ddgst": false 00:19:50.162 }, 00:19:50.162 "method": "bdev_nvme_attach_controller" 00:19:50.162 }' 00:19:50.162 [2024-07-26 12:20:43.238642] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:19:50.162 [2024-07-26 12:20:43.238718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2916140 ] 00:19:50.162 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.162 [2024-07-26 12:20:43.303043] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.162 [2024-07-26 12:20:43.412336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:52.113 Running I/O for 10 seconds... 
00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # return 0 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:19:52.113 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:52.372 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:52.372 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:52.372 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:52.372 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:52.372 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.372 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:52.372 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.372 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:19:52.372 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:19:52.372 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=195 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 2916140 00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2916140 
']'
00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2916140
00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname
00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:19:52.631 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2916140
00:19:52.889 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_0
00:19:52.889 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']'
00:19:52.889 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2916140'
killing process with pid 2916140
00:19:52.889 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2916140
00:19:52.889 12:20:45 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2916140
00:19:52.889 Received shutdown signal, test time was about 1.098688 seconds
00:19:52.889
00:19:52.889 Latency(us)
00:19:52.889 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:19:52.889 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.889 Verification LBA range: start 0x0 length 0x400
00:19:52.889 Nvme1n1 : 1.09 234.80 14.67 0.00 0.00 269802.19 21262.79 262532.36
00:19:52.889 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.889 Verification LBA range: start 0x0 length 0x400
00:19:52.889 Nvme2n1 : 1.05 243.90 15.24 0.00 0.00 253298.16 22233.69 243891.01
00:19:52.889 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.889 Verification LBA range: start 0x0 length 0x400
00:19:52.889 Nvme3n1 : 1.08 237.17 14.82 0.00 0.00 257892.88 25243.50 284280.60
00:19:52.889 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.889 Verification LBA range: start 0x0 length 0x400
00:19:52.889 Nvme4n1 : 1.06 241.66 15.10 0.00 0.00 248306.92 16990.81 254765.13
00:19:52.889 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.889 Verification LBA range: start 0x0 length 0x400
00:19:52.889 Nvme5n1 : 1.10 233.18 14.57 0.00 0.00 253341.39 25243.50 254765.13
00:19:52.889 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.889 Verification LBA range: start 0x0 length 0x400
00:19:52.889 Nvme6n1 : 1.07 238.47 14.90 0.00 0.00 242694.26 17670.45 234570.33
00:19:52.889 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.889 Verification LBA range: start 0x0 length 0x400
00:19:52.889 Nvme7n1 : 1.09 233.97 14.62 0.00 0.00 243163.78 22913.33 271853.04
00:19:52.889 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.889 Verification LBA range: start 0x0 length 0x400
00:19:52.889 Nvme8n1 : 1.07 243.78 15.24 0.00 0.00 227640.88 3519.53 251658.24
00:19:52.889 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.889 Verification LBA range: start 0x0 length 0x400
00:19:52.889 Nvme9n1 : 1.05 183.13 11.45 0.00 0.00 296940.28 29515.47 288940.94
00:19:52.889 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:19:52.889 Verification LBA range: start 0x0 length 0x400
00:19:52.889 Nvme10n1 : 1.09 235.83 14.74 0.00 0.00 227563.71 21068.61 237677.23
00:19:52.889 ===================================================================================================================
00:19:52.889 Total : 2325.89 145.37 0.00 0.00 250876.55 3519.53 288940.94
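The polling traced above (target/shutdown.sh's waitforio: sample `num_read_ops` via `rpc_cmd ... bdev_get_iostat | jq`, sleep 0.25s, retry at most 10 times until the count reaches 100) can be sketched as a self-contained loop. `fake_read_ops` is a stand-in for the RPC/jq pipeline; its 67-per-call growth mimics the 3 → 67 → 195 samples in the log.

```shell
# Minimal sketch of the waitforio retry loop; fake_read_ops replaces
# `rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 | jq -r '.bdevs[0].num_read_ops'`.
counter=0
fake_read_ops() {
    counter=$((counter + 67))   # grows like the samples seen in the trace
}

waitforio() {
    ret=1
    i=10                         # same retry budget as the traced loop
    while [ "$i" -ne 0 ]; do
        fake_read_ops
        read_io_count=$counter
        if [ "$read_io_count" -ge 100 ]; then
            ret=0                # enough I/O observed; stop polling
            break
        fi
        sleep 0.25
        i=$((i - 1))
    done
    return $ret
}

waitforio && echo "read I/O threshold reached (count=$read_io_count)"
# prints: read I/O threshold reached (count=134)
```

The bounded retry count keeps a hung target from stalling the test forever; on exhaustion the helper returns nonzero and the caller fails the test.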
00:19:53.147 12:20:46 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1 00:19:54.080 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 2915957 00:19:54.080 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget 00:19:54.080 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:19:54.080 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:19:54.080 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:54.080 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:19:54.080 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:19:54.080 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:19:54.080 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:19:54.080 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:19:54.080 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:19:54.080 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:19:54.080 rmmod nvme_tcp 00:19:54.080 rmmod nvme_fabrics 00:19:54.080 rmmod nvme_keyring 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
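The killprocess helper exercised in these traces (liveness probe with `kill -0`, a `ps` comm check so a `sudo` wrapper is never signalled, then `kill` and `wait`) can be sketched as below. This is a simplified illustration, not the real autotest_common.sh helper, which handles more platforms and fallbacks.

```shell
# Hedged sketch of the kill/wait pattern from the trace.
killprocess() {
    pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 1       # still alive?
    name=$(ps --no-headers -o comm= "$pid")
    [ "$name" != sudo ] || return 1              # refuse to signal sudo itself
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true              # reap; valid for our own children
    return 0
}

sleep 60 &        # disposable background child to demonstrate on
demo_pid=$!
killprocess "$demo_pid"
```

Checking the process name before signalling matters because the test scripts often launch targets under sudo; killing the wrapper instead of the reactor would leave the real process orphaned.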
00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 2915957 ']' 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 2915957 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@950 -- # '[' -z 2915957 ']' 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # kill -0 2915957 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # uname 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2915957 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2915957' 00:19:54.339 killing process with pid 2915957 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@969 -- # kill 2915957 00:19:54.339 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@974 -- # wait 2915957 00:19:54.909 12:20:47 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:19:54.909 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:19:54.909 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:19:54.909 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:19:54.909 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:19:54.909 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:54.909 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:54.909 12:20:47 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.809 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:19:56.809 00:19:56.809 real 0m7.898s 00:19:56.809 user 0m23.761s 00:19:56.809 sys 0m1.601s 00:19:56.809 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:19:56.809 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:19:56.809 ************************************ 00:19:56.809 END TEST nvmf_shutdown_tc2 00:19:56.809 ************************************ 00:19:56.809 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:19:56.809 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:19:56.809 12:20:49 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1107 -- # xtrace_disable 00:19:56.809 12:20:49 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:19:56.809 ************************************ 00:19:56.809 START TEST nvmf_shutdown_tc3 00:19:56.809 ************************************ 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1125 -- # nvmf_shutdown_tc3 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:19:56.809 12:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 
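The e810/x722/mlx arrays declared here (and filled just below) bucket PCI (vendor, device) IDs into NIC families before enumeration. A hedged stand-alone sketch of that classification, covering only the subset of IDs visible in this trace:

```shell
# Classify a PCI (vendor, device) pair into the NIC families used by the test
# harness. Illustrative subset only; the real script matches a longer ID list.
classify_nic() {
    case "$1:$2" in
        0x8086:0x1592 | 0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)                 echo x722 ;;
        0x15b3:*)                      echo mlx ;;
        *)                             echo unknown ;;
    esac
}

classify_nic 0x8086 0x159b   # the ID reported for both 0000:0a:00.x ports below
# prints: e810
```

Keying on the vendor:device pair is what lets the harness pick matching `/sys/bus/pci/devices/$pci/net/*` entries later without caring which driver bound the port.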
00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:56.809 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:19:56.810 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:19:56.810 Found 0000:0a:00.1 
(0x8086 - 0x159b) 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:19:56.810 Found net devices under 0000:0a:00.0: cvl_0_0 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:19:56.810 Found net devices under 0000:0a:00.1: cvl_0_1 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:19:56.810 12:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add 
cvl_0_0_ns_spdk
00:19:56.810 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2
00:19:57.068 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:19:57.068 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms
00:19:57.068
00:19:57.068 --- 10.0.0.2 ping statistics ---
00:19:57.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:57.068 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:19:57.068 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:19:57.068 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.119 ms
00:19:57.068
00:19:57.068 --- 10.0.0.1 ping statistics ---
00:19:57.068 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:19:57.068 rtt min/avg/max/mdev = 0.119/0.119/0.119/0.000 ms
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable
00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:19:57.068
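The nvmf_tcp_init sequence traced above builds a two-endpoint topology on one host: the target-side port moves into a private network namespace, both ends get 10.0.0.x/24 addresses, and port 4420 is admitted through the firewall, then each direction is ping-verified. A sketch that prints the equivalent command plan (printed rather than executed, since the real steps require root and the cvl_0_* devices):

```shell
# Reconstruct the namespace setup from the trace as a printable plan.
# Names are the ones from this log; TGT_IF/INI_IF roles are as inferred here.
NS=cvl_0_0_ns_spdk
TGT_IF=cvl_0_0      # target-side port, moved into the namespace
INI_IF=cvl_0_1      # initiator-side port, stays in the root namespace

plan() {
    cat <<EOF
ip netns add $NS
ip link set $TGT_IF netns $NS
ip addr add 10.0.0.1/24 dev $INI_IF
ip netns exec $NS ip addr add 10.0.0.2/24 dev $TGT_IF
ip link set $INI_IF up
ip netns exec $NS ip link set $TGT_IF up
ip netns exec $NS ip link set lo up
iptables -I INPUT 1 -i $INI_IF -p tcp --dport 4420 -j ACCEPT
EOF
}

plan
```

Isolating the target NIC in its own namespace is what lets two ports on the same machine talk over real TCP as if they were separate hosts; this is also why the target app is launched below under `ip netns exec cvl_0_0_ns_spdk`.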
12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=2917058 00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 2917058 00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2917058 ']' 00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:57.068 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.068 [2024-07-26 12:20:50.223314] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:19:57.068 [2024-07-26 12:20:50.223418] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.068 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.068 [2024-07-26 12:20:50.286319] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:19:57.326 [2024-07-26 12:20:50.392941] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.326 [2024-07-26 12:20:50.392993] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.326 [2024-07-26 12:20:50.393014] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.326 [2024-07-26 12:20:50.393025] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.326 [2024-07-26 12:20:50.393048] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:57.326 [2024-07-26 12:20:50.393150] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:57.326 [2024-07-26 12:20:50.393213] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:19:57.326 [2024-07-26 12:20:50.393263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:19:57.326 [2024-07-26 12:20:50.393265] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.326 [2024-07-26 12:20:50.544512] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.326 12:20:50 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@724 -- # xtrace_disable 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 
00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.326 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:57.586 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.586 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:57.586 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:19:57.586 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:19:57.586 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:19:57.586 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:19:57.586 12:20:50 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:57.586 Malloc1 00:19:57.586 [2024-07-26 12:20:50.634053] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:57.586 Malloc2 00:19:57.586 Malloc3 00:19:57.586 Malloc4 00:19:57.586 Malloc5 00:19:57.844 Malloc6 00:19:57.844 Malloc7 00:19:57.844 Malloc8 00:19:57.844 Malloc9 
00:19:57.844 Malloc10 00:19:57.844 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:19:57.844 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:19:57.844 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@730 -- # xtrace_disable 00:19:57.844 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=2917236 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 2917236 /var/tmp/bdevperf.sock 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@831 -- # '[' -z 2917236 ']' 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # local max_retries=100 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:19:58.110 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # xtrace_disable 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.110 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.110 { 00:19:58.110 "params": { 00:19:58.110 "name": "Nvme$subsystem", 00:19:58.110 "trtype": "$TEST_TRANSPORT", 00:19:58.110 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.110 "adrfam": "ipv4", 00:19:58.110 "trsvcid": "$NVMF_PORT", 00:19:58.110 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.110 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.110 "hdgst": ${hdgst:-false}, 00:19:58.110 "ddgst": ${ddgst:-false} 00:19:58.110 }, 00:19:58.110 "method": "bdev_nvme_attach_controller" 00:19:58.110 } 00:19:58.110 EOF 00:19:58.110 )") 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.111 { 00:19:58.111 "params": { 00:19:58.111 "name": "Nvme$subsystem", 00:19:58.111 "trtype": "$TEST_TRANSPORT", 00:19:58.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.111 
"adrfam": "ipv4", 00:19:58.111 "trsvcid": "$NVMF_PORT", 00:19:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.111 "hdgst": ${hdgst:-false}, 00:19:58.111 "ddgst": ${ddgst:-false} 00:19:58.111 }, 00:19:58.111 "method": "bdev_nvme_attach_controller" 00:19:58.111 } 00:19:58.111 EOF 00:19:58.111 )") 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.111 { 00:19:58.111 "params": { 00:19:58.111 "name": "Nvme$subsystem", 00:19:58.111 "trtype": "$TEST_TRANSPORT", 00:19:58.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.111 "adrfam": "ipv4", 00:19:58.111 "trsvcid": "$NVMF_PORT", 00:19:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.111 "hdgst": ${hdgst:-false}, 00:19:58.111 "ddgst": ${ddgst:-false} 00:19:58.111 }, 00:19:58.111 "method": "bdev_nvme_attach_controller" 00:19:58.111 } 00:19:58.111 EOF 00:19:58.111 )") 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.111 { 00:19:58.111 "params": { 00:19:58.111 "name": "Nvme$subsystem", 00:19:58.111 "trtype": "$TEST_TRANSPORT", 00:19:58.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.111 "adrfam": "ipv4", 00:19:58.111 "trsvcid": "$NVMF_PORT", 00:19:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:19:58.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.111 "hdgst": ${hdgst:-false}, 00:19:58.111 "ddgst": ${ddgst:-false} 00:19:58.111 }, 00:19:58.111 "method": "bdev_nvme_attach_controller" 00:19:58.111 } 00:19:58.111 EOF 00:19:58.111 )") 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.111 { 00:19:58.111 "params": { 00:19:58.111 "name": "Nvme$subsystem", 00:19:58.111 "trtype": "$TEST_TRANSPORT", 00:19:58.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.111 "adrfam": "ipv4", 00:19:58.111 "trsvcid": "$NVMF_PORT", 00:19:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.111 "hdgst": ${hdgst:-false}, 00:19:58.111 "ddgst": ${ddgst:-false} 00:19:58.111 }, 00:19:58.111 "method": "bdev_nvme_attach_controller" 00:19:58.111 } 00:19:58.111 EOF 00:19:58.111 )") 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.111 { 00:19:58.111 "params": { 00:19:58.111 "name": "Nvme$subsystem", 00:19:58.111 "trtype": "$TEST_TRANSPORT", 00:19:58.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.111 "adrfam": "ipv4", 00:19:58.111 "trsvcid": "$NVMF_PORT", 00:19:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.111 "hdgst": ${hdgst:-false}, 00:19:58.111 "ddgst": 
${ddgst:-false} 00:19:58.111 }, 00:19:58.111 "method": "bdev_nvme_attach_controller" 00:19:58.111 } 00:19:58.111 EOF 00:19:58.111 )") 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.111 { 00:19:58.111 "params": { 00:19:58.111 "name": "Nvme$subsystem", 00:19:58.111 "trtype": "$TEST_TRANSPORT", 00:19:58.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.111 "adrfam": "ipv4", 00:19:58.111 "trsvcid": "$NVMF_PORT", 00:19:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.111 "hdgst": ${hdgst:-false}, 00:19:58.111 "ddgst": ${ddgst:-false} 00:19:58.111 }, 00:19:58.111 "method": "bdev_nvme_attach_controller" 00:19:58.111 } 00:19:58.111 EOF 00:19:58.111 )") 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.111 { 00:19:58.111 "params": { 00:19:58.111 "name": "Nvme$subsystem", 00:19:58.111 "trtype": "$TEST_TRANSPORT", 00:19:58.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.111 "adrfam": "ipv4", 00:19:58.111 "trsvcid": "$NVMF_PORT", 00:19:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.111 "hdgst": ${hdgst:-false}, 00:19:58.111 "ddgst": ${ddgst:-false} 00:19:58.111 }, 00:19:58.111 "method": "bdev_nvme_attach_controller" 00:19:58.111 } 00:19:58.111 EOF 00:19:58.111 
)") 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.111 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.111 { 00:19:58.111 "params": { 00:19:58.111 "name": "Nvme$subsystem", 00:19:58.111 "trtype": "$TEST_TRANSPORT", 00:19:58.111 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.111 "adrfam": "ipv4", 00:19:58.111 "trsvcid": "$NVMF_PORT", 00:19:58.111 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.111 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.111 "hdgst": ${hdgst:-false}, 00:19:58.111 "ddgst": ${ddgst:-false} 00:19:58.111 }, 00:19:58.111 "method": "bdev_nvme_attach_controller" 00:19:58.111 } 00:19:58.111 EOF 00:19:58.112 )") 00:19:58.112 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:58.112 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:19:58.112 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:19:58.112 { 00:19:58.112 "params": { 00:19:58.112 "name": "Nvme$subsystem", 00:19:58.112 "trtype": "$TEST_TRANSPORT", 00:19:58.112 "traddr": "$NVMF_FIRST_TARGET_IP", 00:19:58.112 "adrfam": "ipv4", 00:19:58.112 "trsvcid": "$NVMF_PORT", 00:19:58.112 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:19:58.112 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:19:58.112 "hdgst": ${hdgst:-false}, 00:19:58.112 "ddgst": ${ddgst:-false} 00:19:58.112 }, 00:19:58.112 "method": "bdev_nvme_attach_controller" 00:19:58.112 } 00:19:58.112 EOF 00:19:58.112 )") 00:19:58.112 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:19:58.112 
12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:19:58.112 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:19:58.112 12:20:51 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:19:58.112 "params": { 00:19:58.112 "name": "Nvme1", 00:19:58.112 "trtype": "tcp", 00:19:58.112 "traddr": "10.0.0.2", 00:19:58.112 "adrfam": "ipv4", 00:19:58.112 "trsvcid": "4420", 00:19:58.112 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.112 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.112 "hdgst": false, 00:19:58.112 "ddgst": false 00:19:58.112 }, 00:19:58.112 "method": "bdev_nvme_attach_controller" 00:19:58.112 },{ 00:19:58.112 "params": { 00:19:58.112 "name": "Nvme2", 00:19:58.112 "trtype": "tcp", 00:19:58.112 "traddr": "10.0.0.2", 00:19:58.112 "adrfam": "ipv4", 00:19:58.112 "trsvcid": "4420", 00:19:58.112 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:58.112 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:58.112 "hdgst": false, 00:19:58.112 "ddgst": false 00:19:58.112 }, 00:19:58.112 "method": "bdev_nvme_attach_controller" 00:19:58.112 },{ 00:19:58.112 "params": { 00:19:58.112 "name": "Nvme3", 00:19:58.112 "trtype": "tcp", 00:19:58.112 "traddr": "10.0.0.2", 00:19:58.112 "adrfam": "ipv4", 00:19:58.112 "trsvcid": "4420", 00:19:58.112 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:19:58.112 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:19:58.112 "hdgst": false, 00:19:58.112 "ddgst": false 00:19:58.112 }, 00:19:58.112 "method": "bdev_nvme_attach_controller" 00:19:58.112 },{ 00:19:58.112 "params": { 00:19:58.112 "name": "Nvme4", 00:19:58.112 "trtype": "tcp", 00:19:58.112 "traddr": "10.0.0.2", 00:19:58.112 "adrfam": "ipv4", 00:19:58.112 "trsvcid": "4420", 00:19:58.112 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:19:58.112 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:19:58.112 "hdgst": false, 00:19:58.112 "ddgst": false 00:19:58.112 }, 
00:19:58.112 "method": "bdev_nvme_attach_controller" 00:19:58.112 },{ 00:19:58.112 "params": { 00:19:58.112 "name": "Nvme5", 00:19:58.112 "trtype": "tcp", 00:19:58.112 "traddr": "10.0.0.2", 00:19:58.112 "adrfam": "ipv4", 00:19:58.112 "trsvcid": "4420", 00:19:58.112 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:19:58.112 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:19:58.112 "hdgst": false, 00:19:58.112 "ddgst": false 00:19:58.112 }, 00:19:58.112 "method": "bdev_nvme_attach_controller" 00:19:58.112 },{ 00:19:58.112 "params": { 00:19:58.112 "name": "Nvme6", 00:19:58.112 "trtype": "tcp", 00:19:58.112 "traddr": "10.0.0.2", 00:19:58.112 "adrfam": "ipv4", 00:19:58.112 "trsvcid": "4420", 00:19:58.112 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:19:58.112 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:19:58.112 "hdgst": false, 00:19:58.112 "ddgst": false 00:19:58.112 }, 00:19:58.112 "method": "bdev_nvme_attach_controller" 00:19:58.112 },{ 00:19:58.112 "params": { 00:19:58.112 "name": "Nvme7", 00:19:58.112 "trtype": "tcp", 00:19:58.112 "traddr": "10.0.0.2", 00:19:58.112 "adrfam": "ipv4", 00:19:58.112 "trsvcid": "4420", 00:19:58.112 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:19:58.112 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:19:58.112 "hdgst": false, 00:19:58.112 "ddgst": false 00:19:58.112 }, 00:19:58.112 "method": "bdev_nvme_attach_controller" 00:19:58.112 },{ 00:19:58.112 "params": { 00:19:58.112 "name": "Nvme8", 00:19:58.112 "trtype": "tcp", 00:19:58.112 "traddr": "10.0.0.2", 00:19:58.112 "adrfam": "ipv4", 00:19:58.112 "trsvcid": "4420", 00:19:58.112 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:19:58.112 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:19:58.112 "hdgst": false, 00:19:58.112 "ddgst": false 00:19:58.112 }, 00:19:58.112 "method": "bdev_nvme_attach_controller" 00:19:58.112 },{ 00:19:58.112 "params": { 00:19:58.112 "name": "Nvme9", 00:19:58.112 "trtype": "tcp", 00:19:58.112 "traddr": "10.0.0.2", 00:19:58.112 "adrfam": "ipv4", 00:19:58.112 "trsvcid": "4420", 00:19:58.112 
"subnqn": "nqn.2016-06.io.spdk:cnode9", 00:19:58.112 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:19:58.112 "hdgst": false, 00:19:58.112 "ddgst": false 00:19:58.112 }, 00:19:58.112 "method": "bdev_nvme_attach_controller" 00:19:58.112 },{ 00:19:58.112 "params": { 00:19:58.112 "name": "Nvme10", 00:19:58.112 "trtype": "tcp", 00:19:58.112 "traddr": "10.0.0.2", 00:19:58.112 "adrfam": "ipv4", 00:19:58.112 "trsvcid": "4420", 00:19:58.112 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:19:58.112 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:19:58.112 "hdgst": false, 00:19:58.112 "ddgst": false 00:19:58.112 }, 00:19:58.112 "method": "bdev_nvme_attach_controller" 00:19:58.112 }' 00:19:58.112 [2024-07-26 12:20:51.145376] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:19:58.112 [2024-07-26 12:20:51.145472] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2917236 ] 00:19:58.112 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.112 [2024-07-26 12:20:51.207740] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.112 [2024-07-26 12:20:51.317486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:00.024 Running I/O for 10 seconds... 
00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # return 0 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:20:00.973 12:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=131 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 2917058 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@950 -- # '[' -z 2917058 ']' 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # kill -0 2917058 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # uname 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:00.973 12:20:53 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2917058 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2917058' 00:20:00.973 killing process with pid 2917058 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@969 -- # kill 2917058 00:20:00.973 12:20:53 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@974 -- # wait 2917058 00:20:00.973 [2024-07-26 12:20:53.986953] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x990920 is same with the state(5) to be set 00:20:00.973 [2024-07-26 12:20:53.987036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x990920 is same with the state(5) to be set 00:20:00.973 [2024-07-26 12:20:53.987107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x990920 is same with the state(5) to be set 00:20:00.973 [2024-07-26 12:20:53.987133] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x990920 is same with the state(5) to be set 00:20:00.973 [2024-07-26 12:20:53.987183] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x990920 is same with the state(5) to be set 00:20:00.973 [2024-07-26 12:20:53.987209] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x990920 is same with the state(5) to be set 00:20:00.973 [2024-07-26 12:20:53.987234] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x990920 is same 
with the state(5) to be set 00:20:00.973 [2024-07-26 12:20:53.987257] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x990920 is same with the state(5) to be set 00:20:00.974 [2024-07-26 12:20:53.997113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x993440 is same with the state(5) to be set 00:20:00.975 [2024-07-26 12:20:53.999489] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x990de0 is same with the state(5) to be set 00:20:00.976 [2024-07-26 12:20:54.002079] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9912a0 is same with the state(5) to be set 00:20:00.977 [2024-07-26 12:20:54.002900] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9912a0
is same with the state(5) to be set 00:20:00.977 [2024-07-26 12:20:54.005663] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.977 [2024-07-26 12:20:54.005703] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.977 [2024-07-26 12:20:54.005717] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.977 [2024-07-26 12:20:54.005733] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.977 [2024-07-26 12:20:54.005745] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.977 [2024-07-26 12:20:54.005758] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.977 [2024-07-26 12:20:54.005769] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.977 [2024-07-26 12:20:54.005781] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.977 [2024-07-26 12:20:54.005793] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.977 [2024-07-26 12:20:54.005805] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.977 [2024-07-26 12:20:54.005817] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.977 [2024-07-26 12:20:54.005829] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 
00:20:00.977 [2024-07-26 12:20:54.005841] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.005853] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.005865] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.005877] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.005889] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.005901] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.005913] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.005926] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.005938] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.005949] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.005961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.005974] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.005987] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.005999] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006012] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006024] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006037] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006073] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006087] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006153] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006165] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006177] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006201] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006213] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006224] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006236] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006248] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006295] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006308] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006320] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 
is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006410] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006422] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006433] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006445] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006460] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.006471] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992120 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.007800] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 
00:20:00.978 [2024-07-26 12:20:54.007827] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.007842] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.007855] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.007868] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.007881] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.007894] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.007907] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.007919] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.007932] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.978 [2024-07-26 12:20:54.007945] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.007958] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.007972] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.007985] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.007998] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008011] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008023] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008036] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008050] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008071] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008086] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008121] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008134] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008146] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008163] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008189] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008201] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008226] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008239] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008251] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008264] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008276] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008288] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008300] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008313] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 
is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008325] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008337] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008349] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008375] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008388] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008400] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008423] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008437] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008448] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008461] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008472] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 
00:20:00.979 [2024-07-26 12:20:54.008484] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008496] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008509] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.979 [2024-07-26 12:20:54.008523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.008535] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.008547] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.008559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.008571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.008583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.008595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.008607] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.008618] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.008631] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x9925e0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009695] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009728] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009743] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009757] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009770] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009784] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009797] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009809] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009822] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009835] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009848] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009860] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009872] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009885] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009897] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009910] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009922] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009947] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009961] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.009990] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010002] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010015] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010027] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010054] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 
is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010082] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010096] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010117] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010129] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010141] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010153] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010168] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010181] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010194] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010207] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 
00:20:00.980 [2024-07-26 12:20:54.010245] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010258] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010271] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010283] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010296] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010309] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010321] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010374] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010387] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010399] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010411] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010423] 
tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010439] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.980 [2024-07-26 12:20:54.010451] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.981 [2024-07-26 12:20:54.010463] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.981 [2024-07-26 12:20:54.010475] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.981 [2024-07-26 12:20:54.010486] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.981 [2024-07-26 12:20:54.010499] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.981 [2024-07-26 12:20:54.010511] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.981 [2024-07-26 12:20:54.010523] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.981 [2024-07-26 12:20:54.010534] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.981 [2024-07-26 12:20:54.010546] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.981 [2024-07-26 12:20:54.010559] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set 00:20:00.981 [2024-07-26 12:20:54.010571] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set
00:20:00.981 [2024-07-26 12:20:54.010583] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set
00:20:00.981 [2024-07-26 12:20:54.010595] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992ac0 is same with the state(5) to be set
00:20:00.981 [2024-07-26 12:20:54.011332] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x992f80 is same with the state(5) to be set
[... the same tcp.c:1653 *ERROR* line repeated roughly 60 more times for tqpair=0x992f80, timestamps 12:20:54.011362 through 12:20:54.012191 ...]
00:20:00.982 [2024-07-26 12:20:54.016493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.982 [2024-07-26 12:20:54.016541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same WRITE command / ABORTED - SQ DELETION completion pair repeated for cid:9 through cid:63 (lba 17536 through 24448, step 128), timestamps 12:20:54.016571 through 12:20:54.018295 ...]
00:20:00.984 [2024-07-26 12:20:54.018311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.984 [2024-07-26 12:20:54.018325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same READ command / ABORTED - SQ DELETION completion pair repeated for cid:1 through cid:7 (lba 16512 through 17280, step 128), timestamps 12:20:54.018340 through 12:20:54.018546 ...]
00:20:00.984 [2024-07-26 12:20:54.018600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:20:00.984 [2024-07-26 12:20:54.018701] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d48cc0 was disconnected and freed. reset controller.
00:20:00.984 [2024-07-26 12:20:54.019220] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:00.984 [2024-07-26 12:20:54.019245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pair repeated for cid:1 through cid:3, timestamps 12:20:54.019263 through 12:20:54.019334 ...]
00:20:00.984 [2024-07-26 12:20:54.019348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bf610 is same with the state(5) to be set
[... the same group of four ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs followed by an nvme_tcp.c:327 recv-state *ERROR* repeated for tqpair=0x1d6e730 (12:20:54.019413 through 12:20:54.019532) and for tqpair=0x1ca32e0 (12:20:54.019580 through 12:20:54.019703) ...]
00:20:00.985 [2024-07-26 12:20:54.019750] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
[... the same ASYNC EVENT REQUEST / ABORTED - SQ DELETION pairs repeated for cid:1 through cid:3, timestamps 12:20:54.019770 through 12:20:54.019844 ...]
00:20:00.985 [2024-07-26 12:20:54.019858] nvme_qpair.c: 474:spdk_nvme_print_completion:
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.019872] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d88950 is same with the state(5) to be set 00:20:00.985 [2024-07-26 12:20:54.019920] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 [2024-07-26 12:20:54.019941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.019957] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 [2024-07-26 12:20:54.019970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.019985] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 [2024-07-26 12:20:54.019998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.020013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 [2024-07-26 12:20:54.020027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.020040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc360 is same with the state(5) to be set 00:20:00.985 [2024-07-26 12:20:54.020092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 
[2024-07-26 12:20:54.020124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.020140] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 [2024-07-26 12:20:54.020154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.020168] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 [2024-07-26 12:20:54.020181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.020196] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 [2024-07-26 12:20:54.020209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.020223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be03a0 is same with the state(5) to be set 00:20:00.985 [2024-07-26 12:20:54.020269] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 [2024-07-26 12:20:54.020289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.020304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 [2024-07-26 12:20:54.020318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.020332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 [2024-07-26 12:20:54.020346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.020371] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 [2024-07-26 12:20:54.020384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.020398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bedf00 is same with the state(5) to be set 00:20:00.985 [2024-07-26 12:20:54.020438] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 [2024-07-26 12:20:54.020458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.020473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.985 [2024-07-26 12:20:54.020488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.985 [2024-07-26 12:20:54.020502] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.986 [2024-07-26 12:20:54.020515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 
12:20:54.020530] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.986 [2024-07-26 12:20:54.020545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.020558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bed4a0 is same with the state(5) to be set 00:20:00.986 [2024-07-26 12:20:54.020609] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.986 [2024-07-26 12:20:54.020631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.020646] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.986 [2024-07-26 12:20:54.020661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.020676] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.986 [2024-07-26 12:20:54.020690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.020704] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.986 [2024-07-26 12:20:54.020717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.020731] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x1bbd830 is same with the state(5) to be set 00:20:00.986 [2024-07-26 12:20:54.020775] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.986 [2024-07-26 12:20:54.020796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.020812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.986 [2024-07-26 12:20:54.020826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.020840] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.986 [2024-07-26 12:20:54.020854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.020869] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:00.986 [2024-07-26 12:20:54.020883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.020896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1b50 is same with the state(5) to be set 00:20:00.986 [2024-07-26 12:20:54.021185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 
12:20:54.021230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021411] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.986 [2024-07-26 12:20:54.021729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.986 [2024-07-26 12:20:54.021746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.021760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.021777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.021793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.021810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.021824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.021841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.021856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.021872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.021887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.021904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.021920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.021936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 
12:20:54.021950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.021967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.021982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.021999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022151] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.987 [2024-07-26 12:20:54.022629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.987 [2024-07-26 12:20:54.022645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.022659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 
[2024-07-26 12:20:54.022683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.022699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.022715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.022730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.022746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.022760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.022776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.022790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.022811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.022826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.022842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.022856] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.022872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.022886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.022901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.022919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.022935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.022949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.022965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.022979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.022994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.023008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.023023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.023038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.023054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.023076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.023093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.023114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.023130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.023145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.023161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.023176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.023197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.023212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.023228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.023242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.023257] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d47cd0 is same with the state(5) to be set 00:20:00.988 [2024-07-26 12:20:54.023330] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d47cd0 was disconnected and freed. reset controller. 00:20:00.988 [2024-07-26 12:20:54.024496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.024520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.024552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.024569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.024586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.024601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.024617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.024631] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.024647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.024661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.024677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.024691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.024707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.024722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.024737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.988 [2024-07-26 12:20:54.024752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.988 [2024-07-26 12:20:54.024767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.024781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.024797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 
lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.024811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.024827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.024842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.024858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.024872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.024888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.024902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.024924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.024946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.024963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.024977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:00.989 [2024-07-26 12:20:54.024993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.025007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.025023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.025036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.025057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.025083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036476] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.989 [2024-07-26 12:20:54.036943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.989 [2024-07-26 12:20:54.036959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.036973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.036988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 
12:20:54.037003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037195] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 
[2024-07-26 12:20:54.037552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037720] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.990 [2024-07-26 12:20:54.037826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.990 [2024-07-26 12:20:54.037989] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1d4a1c0 was disconnected and freed. reset controller. 
00:20:00.990 [2024-07-26 12:20:54.038425] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller 00:20:00.991 [2024-07-26 12:20:54.038480] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bedf00 (9): Bad file descriptor 00:20:00.991 [2024-07-26 12:20:54.038556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bf610 (9): Bad file descriptor 00:20:00.991 [2024-07-26 12:20:54.038587] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6e730 (9): Bad file descriptor 00:20:00.991 [2024-07-26 12:20:54.038612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca32e0 (9): Bad file descriptor 00:20:00.991 [2024-07-26 12:20:54.038636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d88950 (9): Bad file descriptor 00:20:00.991 [2024-07-26 12:20:54.038660] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcc360 (9): Bad file descriptor 00:20:00.991 [2024-07-26 12:20:54.038689] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be03a0 (9): Bad file descriptor 00:20:00.991 [2024-07-26 12:20:54.038716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bed4a0 (9): Bad file descriptor 00:20:00.991 [2024-07-26 12:20:54.038740] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbd830 (9): Bad file descriptor 00:20:00.991 [2024-07-26 12:20:54.038767] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1b50 (9): Bad file descriptor 00:20:00.991 [2024-07-26 12:20:54.042279] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller 00:20:00.991 [2024-07-26 12:20:54.042319] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode6] resetting controller 00:20:00.991 [2024-07-26 12:20:54.042644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.991 [2024-07-26 12:20:54.042675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bedf00 with addr=10.0.0.2, port=4420 00:20:00.991 [2024-07-26 12:20:54.042693] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bedf00 is same with the state(5) to be set 00:20:00.991 [2024-07-26 12:20:54.043467] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:00.991 [2024-07-26 12:20:54.043544] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:00.991 [2024-07-26 12:20:54.043614] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:00.991 [2024-07-26 12:20:54.043683] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:00.991 [2024-07-26 12:20:54.043852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.991 [2024-07-26 12:20:54.043879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be03a0 with addr=10.0.0.2, port=4420 00:20:00.991 [2024-07-26 12:20:54.043896] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be03a0 is same with the state(5) to be set 00:20:00.991 [2024-07-26 12:20:54.044032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:00.991 [2024-07-26 12:20:54.044057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed4a0 with addr=10.0.0.2, port=4420 00:20:00.991 [2024-07-26 12:20:54.044081] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bed4a0 is same with the state(5) to be set 00:20:00.991 [2024-07-26 12:20:54.044110] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bedf00 
(9): Bad file descriptor 00:20:00.991 [2024-07-26 12:20:54.044211] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:00.991 [2024-07-26 12:20:54.044280] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:20:00.991 [2024-07-26 12:20:54.044405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be03a0 (9): Bad file descriptor 00:20:00.991 [2024-07-26 12:20:54.044433] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bed4a0 (9): Bad file descriptor 00:20:00.991 [2024-07-26 12:20:54.044451] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:00.991 [2024-07-26 12:20:54.044465] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:00.991 [2024-07-26 12:20:54.044482] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 
00:20:00.991 [2024-07-26 12:20:54.044553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.991 [2024-07-26 12:20:54.044606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.991 [2024-07-26 12:20:54.044639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.991 [2024-07-26 12:20:54.044669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.991 [2024-07-26 12:20:54.044699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.991 [2024-07-26 12:20:54.044730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044745] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.991 [2024-07-26 12:20:54.044761] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.991 [2024-07-26 12:20:54.044791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.991 [2024-07-26 12:20:54.044822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.991 [2024-07-26 12:20:54.044852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.991 [2024-07-26 12:20:54.044882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.991 [2024-07-26 12:20:54.044918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.991 [2024-07-26 12:20:54.044949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.991 [2024-07-26 12:20:54.044980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.991 [2024-07-26 12:20:54.044994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:00.992 [2024-07-26 12:20:54.045114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045280] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:00.992 [2024-07-26 12:20:54.045636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.992 [2024-07-26 12:20:54.045650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.045666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.045684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.045700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.045715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.045731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.045746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.045762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.045776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.045792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 
12:20:54.045806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.045822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.045836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.045855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.045870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.045886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.045900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.045916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.045931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.045948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.045962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.045979] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.045994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 
[2024-07-26 12:20:54.046348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.993 [2024-07-26 12:20:54.046505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.993 [2024-07-26 12:20:54.046521] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.993 [2024-07-26 12:20:54.046535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.994 [2024-07-26 12:20:54.046552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.994 [2024-07-26 12:20:54.046567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.994 [2024-07-26 12:20:54.046583] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c77610 is same with the state(5) to be set
00:20:00.994 [2024-07-26 12:20:54.046668] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1c77610 was disconnected and freed. reset controller.
00:20:00.994 [2024-07-26 12:20:54.046761] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:00.994 [2024-07-26 12:20:54.046789] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state
00:20:00.994 [2024-07-26 12:20:54.046805] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed
00:20:00.994 [2024-07-26 12:20:54.046819] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state.
00:20:00.994 [2024-07-26 12:20:54.046838] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state
00:20:00.994 [2024-07-26 12:20:54.046853] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed
00:20:00.994 [2024-07-26 12:20:54.046867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state.
00:20:00.994 [2024-07-26 12:20:54.048048] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:00.994 [2024-07-26 12:20:54.048078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:00.994 [2024-07-26 12:20:54.048104] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:00.994 [2024-07-26 12:20:54.048339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:00.994 [2024-07-26 12:20:54.048369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bbd830 with addr=10.0.0.2, port=4420
00:20:00.994 [2024-07-26 12:20:54.048387] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbd830 is same with the state(5) to be set
00:20:00.994 [2024-07-26 12:20:54.048715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbd830 (9): Bad file descriptor
00:20:00.994 [2024-07-26 12:20:54.048861] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:20:00.994 [2024-07-26 12:20:54.048884] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:20:00.994 [2024-07-26 12:20:54.048898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:20:00.994 [2024-07-26 12:20:54.048961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.994 [2024-07-26 12:20:54.048983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.994 [2024-07-26 12:20:54.049011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.994 [2024-07-26 12:20:54.049028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.994 [2024-07-26 12:20:54.049045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.994 [2024-07-26 12:20:54.049069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.994 [2024-07-26 12:20:54.049088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.994 [2024-07-26 12:20:54.049103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.994 [2024-07-26 12:20:54.049120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.994 [2024-07-26 12:20:54.049134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.994 [2024-07-26 12:20:54.049151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.994 [2024-07-26 12:20:54.049166] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.994 [2024-07-26 12:20:54.049183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.994 [2024-07-26 12:20:54.049198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.994 [2024-07-26 12:20:54.049215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.994 [2024-07-26 12:20:54.049229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.994 [2024-07-26 12:20:54.049245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.994 [2024-07-26 12:20:54.049259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.994 [2024-07-26 12:20:54.049276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.994 [2024-07-26 12:20:54.049290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.994 [2024-07-26 12:20:54.049306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.994 [2024-07-26 12:20:54.049321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.994 [2024-07-26 12:20:54.049337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 
nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.994 [2024-07-26 12:20:54.049351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.994 [2024-07-26 12:20:54.049368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.994 [2024-07-26 12:20:54.049382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.994 [2024-07-26 12:20:54.049398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.994 [2024-07-26 12:20:54.049417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.994 [2024-07-26 12:20:54.049434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.994 [2024-07-26 12:20:54.049448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.994 [2024-07-26 12:20:54.049465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.994 [2024-07-26 12:20:54.049479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.994 [2024-07-26 12:20:54.049496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.994 [2024-07-26 12:20:54.049510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:00.994 [2024-07-26 12:20:54.049526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.994 [2024-07-26 12:20:54.049541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.994 [2024-07-26 12:20:54.049557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.994 [2024-07-26 12:20:54.049571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.049602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.049634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.049665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.049696] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.049727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.049757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.049788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.049822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.049854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.049884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.049914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.049944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.049974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.049990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 
12:20:54.050235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050402] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.995 [2024-07-26 12:20:54.050432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.995 [2024-07-26 12:20:54.050446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 
[2024-07-26 12:20:54.050761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050937] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.050968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.050982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.051000] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c789a0 is same with the state(5) to be set 00:20:00.996 [2024-07-26 12:20:54.052257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.052281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.052302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.052317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.052334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.052349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.052365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.052379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.052395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.052409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.052426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.052441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.052457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.052471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.052487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.052501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.996 [2024-07-26 12:20:54.052517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.052532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:00.996 [2024-07-26 12:20:54.052549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.996 [2024-07-26 12:20:54.052563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052721] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.052985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.052999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.053032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.053071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.053111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.053142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.053172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.053202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.053231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 
12:20:54.053261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.053290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.053320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.053350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.053382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.053412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.997 [2024-07-26 12:20:54.053437] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.997 [2024-07-26 12:20:54.053452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 
[2024-07-26 12:20:54.053786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:00.998 [2024-07-26 12:20:54.053944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:00.998 [2024-07-26 12:20:54.053960] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.998 [2024-07-26 12:20:54.053974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.998 [2024-07-26 12:20:54.053989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.998 [2024-07-26 12:20:54.054004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.998 [2024-07-26 12:20:54.054019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.998 [2024-07-26 12:20:54.054033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.998 [2024-07-26 12:20:54.054049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.998 [2024-07-26 12:20:54.054081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.998 [2024-07-26 12:20:54.054099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.998 [2024-07-26 12:20:54.054113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.998 [2024-07-26 12:20:54.054129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.998 [2024-07-26 12:20:54.054143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.054159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.054173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.054189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.054208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.054228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.054243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.054259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.054273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.054287] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c79d90 is same with the state(5) to be set
00:20:00.999 [2024-07-26 12:20:54.055542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.055975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.055992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.056006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.056022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.056036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.056052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.056075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.056092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.056106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.056123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.056138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.056154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.056167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.056184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.056198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.056215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.056232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.056248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:00.999 [2024-07-26 12:20:54.056266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:00.999 [2024-07-26 12:20:54.056283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.056298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.056314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.056328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.056345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.056359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.056375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.056389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.056406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.056420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.056436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.056451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.056467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.056481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.063730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.063787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.063805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.063820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.063837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.063852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.063868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.063883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.063899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.063914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.063939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.063954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.063971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.063986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.000 [2024-07-26 12:20:54.064444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.000 [2024-07-26 12:20:54.064458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.064473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.064488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.064504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.064518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.064535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.064548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.064564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.064579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.064594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.064608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.064625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.064639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.064654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.064668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.064684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.064698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.064717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.064732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.064749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.064763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.064779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.064793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.064809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.064824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.064839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bb9010 is same with the state(5) to be set
00:20:01.001 [2024-07-26 12:20:54.066211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.001 [2024-07-26 12:20:54.066688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.001 [2024-07-26 12:20:54.066704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.066718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.066734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.066749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.066766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.066779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.066795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.066810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.066826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.066841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.066859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.066874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.066893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.066908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.066924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.066939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.066955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.066969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.066985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:01.002 [2024-07-26 12:20:54.067407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:20:01.002 [2024-07-26 12:20:54.067423] nvme_qpair.c: 243:nvme_io_qpair_print_command:
*NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.002 [2024-07-26 12:20:54.067437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.002 [2024-07-26 12:20:54.067453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.002 [2024-07-26 12:20:54.067467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.002 [2024-07-26 12:20:54.067483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.002 [2024-07-26 12:20:54.067497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.002 [2024-07-26 12:20:54.067514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.002 [2024-07-26 12:20:54.067528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.002 [2024-07-26 12:20:54.067544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.002 [2024-07-26 12:20:54.067558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.002 [2024-07-26 12:20:54.067574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.067588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.067604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.067618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.067634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.067648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.067671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.067686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.067702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.067716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.067733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.067747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.067763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 
[2024-07-26 12:20:54.067777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.067794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.067808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.067823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.067838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.067854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.067869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.067885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.067899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.067915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.067929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.067946] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.067960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.067978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.067992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.068008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.068022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.068038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.068055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.068080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.068095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.068111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.068125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.068141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.068155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.068171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.068185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.068201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.068215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.068230] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4b0b0 is same with the state(5) to be set 00:20:01.003 [2024-07-26 12:20:54.069469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.069492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.069513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.069529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:01.003 [2024-07-26 12:20:54.069545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.069560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.069577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.069592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.069608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.069622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.069638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.069652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.069668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.003 [2024-07-26 12:20:54.069687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.003 [2024-07-26 12:20:54.069703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.069718] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.069735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.069750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.069766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.069780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.069796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.069811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.069826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.069840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.069857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.069871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.069887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 
lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.069901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.069917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.069931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.069947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.069961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.069977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.069991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.070007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.070021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.070037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.070051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:01.004 [2024-07-26 12:20:54.070079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.070102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.070118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.070133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.070149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.070162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.070178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.070192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.070208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.070222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.070238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.070252] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.070268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.070283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.070299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.070313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.070329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.070343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.004 [2024-07-26 12:20:54.070359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.004 [2024-07-26 12:20:54.070373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 
12:20:54.070770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070940] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.070971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.070985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.071001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.071015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.071031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.071045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.071067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.071084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.071100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.071114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.071131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.071144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.071160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.071174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.071190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.005 [2024-07-26 12:20:54.071204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.005 [2024-07-26 12:20:54.071220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.071237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.071254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.071269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.071285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 
[2024-07-26 12:20:54.071299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.071315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.071329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.071345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.071359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.071375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.071389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.071405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.071418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.071434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.071448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.071463] nvme_tcp.c: 
327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d4c450 is same with the state(5) to be set 00:20:01.006 [2024-07-26 12:20:54.073102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 
lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 
[2024-07-26 12:20:54.073465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073630] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.006 [2024-07-26 12:20:54.073712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.006 [2024-07-26 12:20:54.073727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.073743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.073757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.073773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.073788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.073804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 
nsid:1 lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.073818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.073834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.073848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.073864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.073878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.073894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.073908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.073924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.073938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.073954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.073968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:01.007 [2024-07-26 12:20:54.073984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.073999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074174] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.007 [2024-07-26 12:20:54.074588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.007 [2024-07-26 12:20:54.074602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.074619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.074633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.074650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.074666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.074684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 
12:20:54.074699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.074715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.074730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.074746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.074761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.074777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.074791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.074807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.074821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.074841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.074856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.074872] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.074886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.074902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.074916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.074932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.074947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.074963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.074977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.074993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.075007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.075023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.075037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.075053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.075074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.075091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:01.008 [2024-07-26 12:20:54.075106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:01.008 [2024-07-26 12:20:54.075121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca38b0 is same with the state(5) to be set 00:20:01.008 [2024-07-26 12:20:54.076696] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:01.008 [2024-07-26 12:20:54.076728] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:20:01.008 [2024-07-26 12:20:54.076752] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:20:01.008 [2024-07-26 12:20:54.076772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:20:01.008 [2024-07-26 12:20:54.076894] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.008 [2024-07-26 12:20:54.076920] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:20:01.008 [2024-07-26 12:20:54.076942] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:20:01.008 [2024-07-26 12:20:54.077046] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller
00:20:01.008 [2024-07-26 12:20:54.077080] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:20:01.008 task offset: 17408 on job bdev=Nvme5n1 fails
00:20:01.008
00:20:01.008 Latency(us)
00:20:01.008 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:01.008 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:01.008 Job: Nvme1n1 ended in about 0.89 seconds with error
00:20:01.008 Verification LBA range: start 0x0 length 0x400
00:20:01.008 Nvme1n1 : 0.89 143.78 8.99 71.89 0.00 293286.49 24369.68 315349.52
00:20:01.008 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:01.008 Job: Nvme2n1 ended in about 0.89 seconds with error
00:20:01.008 Verification LBA range: start 0x0 length 0x400
00:20:01.008 Nvme2n1 : 0.89 143.12 8.94 71.56 0.00 288602.14 23301.69 310689.19
00:20:01.008 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:01.008 Job: Nvme3n1 ended in about 0.90 seconds with error
00:20:01.008 Verification LBA range: start 0x0 length 0x400
00:20:01.008 Nvme3n1 : 0.90 142.60 8.91 71.30 0.00 283560.52 24466.77 299815.06
00:20:01.008 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:01.008 Job: Nvme4n1 ended in about 0.88 seconds with error
00:20:01.008 Verification LBA range: start 0x0 length 0x400
00:20:01.008 Nvme4n1 : 0.88 145.06 9.07 72.53 0.00 272392.09 23398.78 316902.97
00:20:01.008 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:01.008 Job: Nvme5n1 ended in about 0.87 seconds with error
00:20:01.008 Verification LBA range: start 0x0 length 0x400
00:20:01.008 Nvme5n1 : 0.87 147.68 9.23 73.84 0.00 261048.32 7184.69 318456.41
00:20:01.008 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:01.008 Job: Nvme6n1 ended in about 0.88 seconds with error
00:20:01.008 Verification LBA range: start 0x0 length 0x400
00:20:01.008 Nvme6n1 : 0.88 144.88 9.05 72.44 0.00 260500.54 21554.06 315349.52
00:20:01.008 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:01.008 Job: Nvme7n1 ended in about 0.91 seconds with error
00:20:01.008 Verification LBA range: start 0x0 length 0x400
00:20:01.008 Nvme7n1 : 0.91 140.94 8.81 70.47 0.00 262556.63 43690.67 292047.83
00:20:01.009 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:01.009 Job: Nvme8n1 ended in about 0.91 seconds with error
00:20:01.009 Verification LBA range: start 0x0 length 0x400
00:20:01.009 Nvme8n1 : 0.91 140.42 8.78 70.21 0.00 257463.94 42137.22 315349.52
00:20:01.009 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:01.009 Job: Nvme9n1 ended in about 0.91 seconds with error
00:20:01.009 Verification LBA range: start 0x0 length 0x400
00:20:01.009 Nvme9n1 : 0.91 69.96 4.37 69.96 0.00 379105.28 27767.85 355739.12
00:20:01.009 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:20:01.009 Job: Nvme10n1 ended in about 0.92 seconds with error
00:20:01.009 Verification LBA range: start 0x0 length 0x400
00:20:01.009 Nvme10n1 : 0.92 69.69 4.36 69.69 0.00 372220.21 23301.69 366613.24
00:20:01.009 ===================================================================================================================
00:20:01.009 Total : 1288.13 80.51 713.89 0.00 287174.39 7184.69 366613.24
00:20:01.009 [2024-07-26 12:20:54.103151] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:20:01.009 [2024-07-26 12:20:54.103235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller
00:20:01.009 [2024-07-26 12:20:54.103608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:01.009 [2024-07-26 12:20:54.103646]
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be1b50 with addr=10.0.0.2, port=4420 [2024-07-26 12:20:54.103680] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be1b50 is same with the state(5) to be set
00:20:01.009 [2024-07-26 12:20:54.103815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:01.009 [2024-07-26 12:20:54.103842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bcc360 with addr=10.0.0.2, port=4420 [2024-07-26 12:20:54.103859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bcc360 is same with the state(5) to be set
00:20:01.009 [2024-07-26 12:20:54.103986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:01.009 [2024-07-26 12:20:54.104013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ca32e0 with addr=10.0.0.2, port=4420 [2024-07-26 12:20:54.104029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ca32e0 is same with the state(5) to be set
00:20:01.009 [2024-07-26 12:20:54.105772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:20:01.009 [2024-07-26 12:20:54.105803] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller
00:20:01.009 [2024-07-26 12:20:54.105821] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:20:01.009 [2024-07-26 12:20:54.105997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:01.009 [2024-07-26 12:20:54.106026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d88950 with addr=10.0.0.2, port=4420 [2024-07-26 12:20:54.106043] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d88950 is same with the state(5) to be set
00:20:01.009 [2024-07-26 12:20:54.106163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:01.009 [2024-07-26 12:20:54.106190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x16bf610 with addr=10.0.0.2, port=4420 [2024-07-26 12:20:54.106207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16bf610 is same with the state(5) to be set
00:20:01.009 [2024-07-26 12:20:54.106318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:01.009 [2024-07-26 12:20:54.106346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1d6e730 with addr=10.0.0.2, port=4420 [2024-07-26 12:20:54.106363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d6e730 is same with the state(5) to be set
00:20:01.009 [2024-07-26 12:20:54.106391] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be1b50 (9): Bad file descriptor
00:20:01.009 [2024-07-26 12:20:54.106416] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bcc360 (9): Bad file descriptor
00:20:01.009 [2024-07-26 12:20:54.106435] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ca32e0 (9): Bad file descriptor
00:20:01.009 [2024-07-26 12:20:54.106511] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:01.009 [2024-07-26 12:20:54.106540] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:01.009 [2024-07-26 12:20:54.106563] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:01.009 [2024-07-26 12:20:54.106584] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress.
00:20:01.009 [2024-07-26 12:20:54.106658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:20:01.009 [2024-07-26 12:20:54.106827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:01.009 [2024-07-26 12:20:54.106856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bedf00 with addr=10.0.0.2, port=4420 [2024-07-26 12:20:54.106873] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bedf00 is same with the state(5) to be set
00:20:01.009 [2024-07-26 12:20:54.107011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:01.009 [2024-07-26 12:20:54.107039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bed4a0 with addr=10.0.0.2, port=4420 [2024-07-26 12:20:54.107055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bed4a0 is same with the state(5) to be set
00:20:01.009 [2024-07-26 12:20:54.107185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:20:01.009 [2024-07-26 12:20:54.107212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1be03a0 with addr=10.0.0.2, port=4420 [2024-07-26 12:20:54.107229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1be03a0 is same with the state(5) to be set
00:20:01.009 [2024-07-26 12:20:54.107248] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d88950 (9): Bad file descriptor
00:20:01.009 [2024-07-26 12:20:54.107268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x16bf610 (9): Bad file descriptor
00:20:01.009 [2024-07-26 12:20:54.107287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d6e730 (9): Bad file descriptor
00:20:01.009 [2024-07-26 12:20:54.107305] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state
00:20:01.009 [2024-07-26 12:20:54.107318] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed
00:20:01.009 [2024-07-26 12:20:54.107335] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state.
00:20:01.009 [2024-07-26 12:20:54.107355] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state
00:20:01.009 [2024-07-26 12:20:54.107370] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed
00:20:01.009 [2024-07-26 12:20:54.107384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state.
00:20:01.009 [2024-07-26 12:20:54.107401] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state
00:20:01.009 [2024-07-26 12:20:54.107415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed
00:20:01.009 [2024-07-26 12:20:54.107428] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state.
00:20:01.009 [2024-07-26 12:20:54.107519] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:01.009 [2024-07-26 12:20:54.107540] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:01.009 [2024-07-26 12:20:54.107552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:20:01.009 [2024-07-26 12:20:54.107671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:20:01.009 [2024-07-26 12:20:54.107698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1bbd830 with addr=10.0.0.2, port=4420 00:20:01.009 [2024-07-26 12:20:54.107714] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1bbd830 is same with the state(5) to be set 00:20:01.009 [2024-07-26 12:20:54.107733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bedf00 (9): Bad file descriptor 00:20:01.010 [2024-07-26 12:20:54.107754] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bed4a0 (9): Bad file descriptor 00:20:01.010 [2024-07-26 12:20:54.107773] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1be03a0 (9): Bad file descriptor 00:20:01.010 [2024-07-26 12:20:54.107789] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:20:01.010 [2024-07-26 12:20:54.107802] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:20:01.010 [2024-07-26 12:20:54.107821] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:20:01.010 [2024-07-26 12:20:54.107839] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:20:01.010 [2024-07-26 12:20:54.107854] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:20:01.010 [2024-07-26 12:20:54.107868] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 
00:20:01.010 [2024-07-26 12:20:54.107883] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:20:01.010 [2024-07-26 12:20:54.107897] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:20:01.010 [2024-07-26 12:20:54.107911] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:20:01.010 [2024-07-26 12:20:54.107951] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:01.010 [2024-07-26 12:20:54.107969] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:01.010 [2024-07-26 12:20:54.107981] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:01.010 [2024-07-26 12:20:54.107997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1bbd830 (9): Bad file descriptor 00:20:01.010 [2024-07-26 12:20:54.108015] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:20:01.010 [2024-07-26 12:20:54.108028] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:20:01.010 [2024-07-26 12:20:54.108042] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:20:01.010 [2024-07-26 12:20:54.108067] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:20:01.010 [2024-07-26 12:20:54.108084] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:20:01.010 [2024-07-26 12:20:54.108098] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 
00:20:01.010 [2024-07-26 12:20:54.108114] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:20:01.010 [2024-07-26 12:20:54.108128] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:20:01.010 [2024-07-26 12:20:54.108142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:20:01.010 [2024-07-26 12:20:54.108181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:01.010 [2024-07-26 12:20:54.108198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:01.010 [2024-07-26 12:20:54.108211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:20:01.010 [2024-07-26 12:20:54.108224] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:20:01.010 [2024-07-26 12:20:54.108236] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:20:01.010 [2024-07-26 12:20:54.108250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:20:01.010 [2024-07-26 12:20:54.108286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:20:01.578 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid=
00:20:01.578 12:20:54 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 2917236
00:20:02.515 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (2917236) - No such process
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20}
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:20:02.515 rmmod nvme_tcp
00:20:02.515 rmmod nvme_fabrics
00:20:02.515 rmmod nvme_keyring
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']'
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:02.515 12:20:55 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:05.054 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:20:05.054
00:20:05.054 real 0m7.674s
00:20:05.054 user 0m19.291s
00:20:05.054 sys 0m1.465s
00:20:05.054 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:05.054 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:20:05.054 ************************************
00:20:05.054 END TEST nvmf_shutdown_tc3
00:20:05.054 ************************************
00:20:05.054 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT
00:20:05.054
00:20:05.054 real 0m28.295s
00:20:05.054 user 1m19.379s
00:20:05.054 sys 0m6.621s
00:20:05.054 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:05.054 12:20:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x
00:20:05.054 ************************************
00:20:05.054 END TEST nvmf_shutdown
00:20:05.054 ************************************
00:20:05.054 12:20:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT
00:20:05.054
00:20:05.054 real 10m35.639s
00:20:05.054 user 25m8.859s
00:20:05.054 sys 2m38.889s
00:20:05.054 12:20:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1126 -- # xtrace_disable
00:20:05.054 12:20:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:20:05.054 ************************************
00:20:05.054 END TEST nvmf_target_extra
00:20:05.054 ************************************
00:20:05.054 12:20:57 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:20:05.054 12:20:57 nvmf_tcp -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:20:05.054 12:20:57 nvmf_tcp -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:05.054 12:20:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:20:05.054
************************************
00:20:05.054 START TEST nvmf_host
************************************
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp
00:20:05.054 * Looking for test storage...
00:20:05.054 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:05.054 12:20:57 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@")
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]]
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:20:05.055 ************************************
00:20:05.055 START TEST nvmf_multicontroller
00:20:05.055 ************************************
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:20:05.055 * Looking for test storage...
00:20:05.055 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']'
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']'
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]]
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs
00:20:05.055 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable
00:20:05.056 12:20:57 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=()
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=()
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=()
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=()
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=()
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=()
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=()
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:20:06.960 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}")
00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]]
00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]]
00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]]
00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}")
00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 ))
00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}"
00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)'
00:20:06.961 Found 0000:0a:00.0 (0x8086 - 0x159b)
00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]]
00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice
== unbound ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:06.961 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev 
in "${!pci_net_devs[@]}" 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:06.961 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:06.961 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@414 -- # is_hw=yes 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:06.961 12:20:59 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:06.961 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:06.961 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.207 ms 00:20:06.961 00:20:06.961 --- 10.0.0.2 ping statistics --- 00:20:06.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.961 rtt min/avg/max/mdev = 0.207/0.207/0.207/0.000 ms 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:06.961 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:06.961 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:06.961 00:20:06.961 --- 10.0.0.1 ping statistics --- 00:20:06.961 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.961 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=2919801 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 2919801 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@831 -- # '[' -z 2919801 ']' 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.961 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:06.961 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:06.961 [2024-07-26 12:21:00.202953] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:20:06.962 [2024-07-26 12:21:00.203032] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:07.220 EAL: No free 2048 kB hugepages reported on node 1 00:20:07.220 [2024-07-26 12:21:00.269992] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:20:07.220 [2024-07-26 12:21:00.384618] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:07.220 [2024-07-26 12:21:00.384675] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:20:07.220 [2024-07-26 12:21:00.384697] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:07.220 [2024-07-26 12:21:00.384709] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:07.220 [2024-07-26 12:21:00.384718] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:07.220 [2024-07-26 12:21:00.384819] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:07.220 [2024-07-26 12:21:00.384877] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:07.220 [2024-07-26 12:21:00.384880] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:07.478 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:07.478 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:20:07.478 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:07.478 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:07.478 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.478 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:07.478 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:07.478 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 [2024-07-26 12:21:00.527027] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 Malloc0 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 [2024-07-26 
12:21:00.598975] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 [2024-07-26 12:21:00.606869] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 Malloc1 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=2919856 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 2919856 /var/tmp/bdevperf.sock 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@831 -- # '[' -z 2919856 ']' 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:07.479 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:07.479 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:08.046 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:08.046 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # return 0 00:20:08.046 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:08.046 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.046 12:21:00 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:08.046 NVMe0n1 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # 
rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.046 1 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 
00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:08.046 request: 00:20:08.046 { 00:20:08.046 "name": "NVMe0", 00:20:08.046 "trtype": "tcp", 00:20:08.046 "traddr": "10.0.0.2", 00:20:08.046 "adrfam": "ipv4", 00:20:08.046 "trsvcid": "4420", 00:20:08.046 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.046 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:20:08.046 "hostaddr": "10.0.0.2", 00:20:08.046 "hostsvcid": "60000", 00:20:08.046 "prchk_reftag": false, 00:20:08.046 "prchk_guard": false, 00:20:08.046 "hdgst": false, 00:20:08.046 "ddgst": false, 00:20:08.046 "method": "bdev_nvme_attach_controller", 00:20:08.046 "req_id": 1 00:20:08.046 } 00:20:08.046 Got JSON-RPC error response 00:20:08.046 response: 00:20:08.046 { 00:20:08.046 "code": -114, 00:20:08.046 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:08.046 } 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:08.046 12:21:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:08.046 request: 00:20:08.046 { 00:20:08.046 "name": "NVMe0", 00:20:08.046 "trtype": "tcp", 00:20:08.046 "traddr": "10.0.0.2", 00:20:08.046 "adrfam": "ipv4", 00:20:08.046 "trsvcid": "4420", 00:20:08.046 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:20:08.046 "hostaddr": "10.0.0.2", 00:20:08.046 "hostsvcid": "60000", 00:20:08.046 "prchk_reftag": false, 00:20:08.046 "prchk_guard": false, 00:20:08.046 "hdgst": false, 00:20:08.046 "ddgst": false, 00:20:08.046 "method": "bdev_nvme_attach_controller", 00:20:08.046 "req_id": 1 00:20:08.046 } 00:20:08.046 Got JSON-RPC error response 00:20:08.046 response: 00:20:08.046 { 00:20:08.046 "code": -114, 00:20:08.046 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:08.046 } 00:20:08.046 
12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:08.046 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:20:08.047 12:21:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:08.047 request: 00:20:08.047 { 00:20:08.047 "name": "NVMe0", 00:20:08.047 "trtype": "tcp", 00:20:08.047 "traddr": "10.0.0.2", 00:20:08.047 "adrfam": "ipv4", 00:20:08.047 "trsvcid": "4420", 00:20:08.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.047 "hostaddr": "10.0.0.2", 00:20:08.047 "hostsvcid": "60000", 00:20:08.047 "prchk_reftag": false, 00:20:08.047 "prchk_guard": false, 00:20:08.047 "hdgst": false, 00:20:08.047 "ddgst": false, 00:20:08.047 "multipath": "disable", 00:20:08.047 "method": "bdev_nvme_attach_controller", 00:20:08.047 "req_id": 1 00:20:08.047 } 00:20:08.047 Got JSON-RPC error response 00:20:08.047 response: 00:20:08.047 { 00:20:08.047 "code": -114, 00:20:08.047 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:20:08.047 } 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # local es=0 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:08.047 request: 00:20:08.047 { 00:20:08.047 "name": "NVMe0", 00:20:08.047 "trtype": "tcp", 00:20:08.047 "traddr": "10.0.0.2", 00:20:08.047 "adrfam": "ipv4", 00:20:08.047 "trsvcid": "4420", 00:20:08.047 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:20:08.047 "hostaddr": "10.0.0.2", 00:20:08.047 "hostsvcid": "60000", 00:20:08.047 "prchk_reftag": false, 00:20:08.047 "prchk_guard": false, 00:20:08.047 "hdgst": false, 00:20:08.047 "ddgst": false, 00:20:08.047 "multipath": "failover", 00:20:08.047 "method": "bdev_nvme_attach_controller", 00:20:08.047 "req_id": 1 00:20:08.047 } 00:20:08.047 Got JSON-RPC error response 00:20:08.047 response: 00:20:08.047 { 00:20:08.047 "code": -114, 00:20:08.047 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:20:08.047 
} 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@653 -- # es=1 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.047 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:08.305 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:20:08.305 12:21:01 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:08.305 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:20:08.305 12:21:01 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:09.679 0 00:20:09.679 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:20:09.679 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.680 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:09.680 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.680 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 2919856 00:20:09.680 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' 
-z 2919856 ']' 00:20:09.680 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2919856 00:20:09.680 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:20:09.680 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.680 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2919856 00:20:09.680 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:09.680 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:09.680 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2919856' 00:20:09.680 killing process with pid 2919856 00:20:09.680 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2919856 00:20:09.680 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2919856 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@10 -- # set +x 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:20:09.938 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:09.938 [2024-07-26 12:21:00.709550] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:20:09.938 [2024-07-26 12:21:00.709640] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2919856 ] 00:20:09.938 EAL: No free 2048 kB hugepages reported on node 1 00:20:09.938 [2024-07-26 12:21:00.774170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.938 [2024-07-26 12:21:00.887462] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:09.938 [2024-07-26 12:21:01.519007] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 1c7fc2c6-3a01-4002-9146-b93fc4ddfb88 already exists 00:20:09.938 [2024-07-26 12:21:01.519070] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:1c7fc2c6-3a01-4002-9146-b93fc4ddfb88 alias for bdev NVMe1n1 00:20:09.938 [2024-07-26 12:21:01.519088] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:20:09.938 Running I/O for 1 seconds... 
00:20:09.938 00:20:09.938 Latency(us) 00:20:09.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.938 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:20:09.938 NVMe0n1 : 1.01 18671.05 72.93 0.00 0.00 6845.37 4150.61 14563.56 00:20:09.938 =================================================================================================================== 00:20:09.938 Total : 18671.05 72.93 0.00 0.00 6845.37 4150.61 14563.56 00:20:09.938 Received shutdown signal, test time was about 1.000000 seconds 00:20:09.938 00:20:09.938 Latency(us) 00:20:09.938 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:09.938 =================================================================================================================== 00:20:09.938 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:09.938 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:09.938 12:21:02 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:09.938 
rmmod nvme_tcp 00:20:09.938 rmmod nvme_fabrics 00:20:09.938 rmmod nvme_keyring 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 2919801 ']' 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 2919801 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@950 -- # '[' -z 2919801 ']' 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # kill -0 2919801 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # uname 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2919801 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2919801' 00:20:09.938 killing process with pid 2919801 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@969 -- # kill 2919801 00:20:09.938 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@974 -- # wait 2919801 00:20:10.199 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:10.199 12:21:03 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:10.199 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:10.199 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:10.199 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:10.199 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:10.199 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:10.199 12:21:03 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:12.730 00:20:12.730 real 0m7.575s 00:20:12.730 user 0m11.715s 00:20:12.730 sys 0m2.367s 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:20:12.730 ************************************ 00:20:12.730 END TEST nvmf_multicontroller 00:20:12.730 ************************************ 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:12.730 ************************************ 00:20:12.730 START TEST nvmf_aer 00:20:12.730 ************************************ 00:20:12.730 12:21:05 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:20:12.730 * Looking for test storage... 00:20:12.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.730 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:20:12.731 12:21:05 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.632 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga 
x722 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 
00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:14.633 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:14.633 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 
-- # for pci in "${pci_devs[@]}" 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:14.633 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:14.633 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:14.633 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:14.633 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.161 ms 00:20:14.633 00:20:14.633 --- 10.0.0.2 ping statistics --- 00:20:14.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.633 rtt min/avg/max/mdev = 0.161/0.161/0.161/0.000 ms 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:14.633 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:14.633 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.114 ms 00:20:14.633 00:20:14.633 --- 10.0.0.1 ping statistics --- 00:20:14.633 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:14.633 rtt min/avg/max/mdev = 0.114/0.114/0.114/0.000 ms 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:14.633 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:14.634 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:20:14.634 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:14.634 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:14.634 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.634 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=2922362 00:20:14.634 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:14.634 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 2922362 00:20:14.634 12:21:07 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@831 -- # '[' -z 2922362 ']' 00:20:14.634 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:14.634 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:14.634 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:14.634 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:14.634 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:14.634 12:21:07 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.634 [2024-07-26 12:21:07.712416] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:20:14.634 [2024-07-26 12:21:07.712490] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:14.634 EAL: No free 2048 kB hugepages reported on node 1 00:20:14.634 [2024-07-26 12:21:07.780277] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:14.894 [2024-07-26 12:21:07.904312] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:14.894 [2024-07-26 12:21:07.904386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:14.894 [2024-07-26 12:21:07.904403] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:14.894 [2024-07-26 12:21:07.904417] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:20:14.894 [2024-07-26 12:21:07.904438] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:14.894 [2024-07-26 12:21:07.904490] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:14.894 [2024-07-26 12:21:07.904539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:14.894 [2024-07-26 12:21:07.904688] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:14.894 [2024-07-26 12:21:07.904691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # return 0 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.894 [2024-07-26 12:21:08.062599] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.894 12:21:08 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.894 Malloc0 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.894 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.895 [2024-07-26 12:21:08.114803] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:14.895 [ 
00:20:14.895 { 00:20:14.895 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:14.895 "subtype": "Discovery", 00:20:14.895 "listen_addresses": [], 00:20:14.895 "allow_any_host": true, 00:20:14.895 "hosts": [] 00:20:14.895 }, 00:20:14.895 { 00:20:14.895 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:14.895 "subtype": "NVMe", 00:20:14.895 "listen_addresses": [ 00:20:14.895 { 00:20:14.895 "trtype": "TCP", 00:20:14.895 "adrfam": "IPv4", 00:20:14.895 "traddr": "10.0.0.2", 00:20:14.895 "trsvcid": "4420" 00:20:14.895 } 00:20:14.895 ], 00:20:14.895 "allow_any_host": true, 00:20:14.895 "hosts": [], 00:20:14.895 "serial_number": "SPDK00000000000001", 00:20:14.895 "model_number": "SPDK bdev Controller", 00:20:14.895 "max_namespaces": 2, 00:20:14.895 "min_cntlid": 1, 00:20:14.895 "max_cntlid": 65519, 00:20:14.895 "namespaces": [ 00:20:14.895 { 00:20:14.895 "nsid": 1, 00:20:14.895 "bdev_name": "Malloc0", 00:20:14.895 "name": "Malloc0", 00:20:14.895 "nguid": "8A81B609721A4A338C3C30F072FC79E1", 00:20:14.895 "uuid": "8a81b609-721a-4a33-8c3c-30f072fc79e1" 00:20:14.895 } 00:20:14.895 ] 00:20:14.895 } 00:20:14.895 ] 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=2922635 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:20:14.895 12:21:08 
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:20:14.895 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:15.183 EAL: No free 2048 kB hugepages reported on node 1 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.183 Malloc1 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.183 [ 00:20:15.183 { 00:20:15.183 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:20:15.183 "subtype": "Discovery", 00:20:15.183 "listen_addresses": [], 00:20:15.183 "allow_any_host": true, 00:20:15.183 "hosts": [] 00:20:15.183 }, 00:20:15.183 { 00:20:15.183 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:20:15.183 "subtype": "NVMe", 00:20:15.183 "listen_addresses": [ 00:20:15.183 { 00:20:15.183 "trtype": "TCP", 00:20:15.183 "adrfam": "IPv4", 00:20:15.183 "traddr": "10.0.0.2", 00:20:15.183 "trsvcid": "4420" 00:20:15.183 } 00:20:15.183 ], 00:20:15.183 "allow_any_host": true, 00:20:15.183 "hosts": [], 00:20:15.183 "serial_number": "SPDK00000000000001", 00:20:15.183 "model_number": 
"SPDK bdev Controller", 00:20:15.183 "max_namespaces": 2, 00:20:15.183 "min_cntlid": 1, 00:20:15.183 "max_cntlid": 65519, 00:20:15.183 "namespaces": [ 00:20:15.183 { 00:20:15.183 "nsid": 1, 00:20:15.183 "bdev_name": "Malloc0", 00:20:15.183 "name": "Malloc0", 00:20:15.183 "nguid": "8A81B609721A4A338C3C30F072FC79E1", 00:20:15.183 "uuid": "8a81b609-721a-4a33-8c3c-30f072fc79e1" 00:20:15.183 }, 00:20:15.183 { 00:20:15.183 "nsid": 2, 00:20:15.183 "bdev_name": "Malloc1", 00:20:15.183 "name": "Malloc1", 00:20:15.183 "nguid": "C28DB2E0D2EE457189C64FAE3B9E0DEF", 00:20:15.183 "uuid": "c28db2e0-d2ee-4571-89c6-4fae3b9e0def" 00:20:15.183 } 00:20:15.183 ] 00:20:15.183 } 00:20:15.183 ] 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 2922635 00:20:15.183 Asynchronous Event Request test 00:20:15.183 Attaching to 10.0.0.2 00:20:15.183 Attached to 10.0.0.2 00:20:15.183 Registering asynchronous event callbacks... 00:20:15.183 Starting namespace attribute notice tests for all controllers... 00:20:15.183 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:20:15.183 aer_cb - Changed Namespace 00:20:15.183 Cleaning up... 
00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.183 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:15.452 rmmod nvme_tcp 
00:20:15.452 rmmod nvme_fabrics 00:20:15.452 rmmod nvme_keyring 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 2922362 ']' 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 2922362 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@950 -- # '[' -z 2922362 ']' 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # kill -0 2922362 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # uname 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2922362 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2922362' 00:20:15.452 killing process with pid 2922362 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@969 -- # kill 2922362 00:20:15.452 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@974 -- # wait 2922362 00:20:15.711 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:15.711 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:15.711 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:15.711 12:21:08 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:15.711 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:15.711 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:15.711 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:15.711 12:21:08 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:18.244 00:20:18.244 real 0m5.380s 00:20:18.244 user 0m4.232s 00:20:18.244 sys 0m1.856s 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 ************************************ 00:20:18.244 END TEST nvmf_aer 00:20:18.244 ************************************ 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:18.244 ************************************ 00:20:18.244 START TEST nvmf_async_init 00:20:18.244 ************************************ 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:20:18.244 * Looking for test storage... 
00:20:18.244 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:18.244 12:21:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:20:18.244 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:20:18.245 12:21:10 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:20:18.245 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:20:18.245 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=e0dc6bc2267647d089dc0e1bb8116380 00:20:18.245 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:20:18.245 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:18.245 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:18.245 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:18.245 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:18.245 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:18.245 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:18.245 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:18.245 12:21:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:18.245 12:21:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:18.245 12:21:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:18.245 12:21:11 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:20:18.245 12:21:11 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:20.150 
12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:20.150 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.150 12:21:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:20.150 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:20.150 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:20.150 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # 
NVMF_INITIATOR_IP=10.0.0.1 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:20.150 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:20.150 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:20:20.150 00:20:20.150 --- 10.0.0.2 ping statistics --- 00:20:20.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.150 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:20.150 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:20.150 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.117 ms 00:20:20.150 00:20:20.150 --- 10.0.0.1 ping statistics --- 00:20:20.150 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:20.150 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:20.150 12:21:13 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.150 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=2924726 00:20:20.151 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:20:20.151 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 2924726 00:20:20.151 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@831 -- # '[' -z 2924726 ']' 00:20:20.151 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:20.151 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:20.151 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:20.151 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:20.151 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:20.151 12:21:13 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:20.151 [2024-07-26 12:21:13.245727] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:20:20.151 [2024-07-26 12:21:13.245810] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:20.151 EAL: No free 2048 kB hugepages reported on node 1 00:20:20.151 [2024-07-26 12:21:13.318569] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:20.408 [2024-07-26 12:21:13.435757] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:20.408 [2024-07-26 12:21:13.435818] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:20.408 [2024-07-26 12:21:13.435834] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:20.408 [2024-07-26 12:21:13.435848] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:20.408 [2024-07-26 12:21:13.435860] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
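The network plumbing and target launch traced above can be condensed into a dry-run sketch. Device names (cvl_0_0 / cvl_0_1), IPs, ports, and nvmf_tgt flags are copied from the log; the `run` helper is a hypothetical wrapper that only records and prints each command, so the sketch executes without root privileges or an SPDK build:

```shell
# Hedged dry-run sketch of the netns setup performed by nvmf_tcp_init above.
# run() records and echoes each command instead of executing it, so no root
# or SPDK install is needed; drop the wrapper to run the commands for real.
set -eu
CMDS=""
run() { CMDS="${CMDS}$*"$'\n'; printf '%s\n' "$*"; }

NS=cvl_0_0_ns_spdk
run ip netns add "$NS"
run ip link set cvl_0_0 netns "$NS"                          # move target NIC into the namespace
run ip addr add 10.0.0.1/24 dev cvl_0_1                      # initiator side stays in the root ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0  # target side inside the namespace
run ip link set cvl_0_1 up
run ip netns exec "$NS" ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set lo up
run iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
run ping -c 1 10.0.0.2                                       # reachability check, as in the log
run ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1
```

Moving the physical port into a namespace (rather than using veth pairs) is what lets the same host act as both NVMe/TCP target and initiator over real NIC hardware, which is the point of the `is_hw=yes` branch taken above.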
00:20:20.408 [2024-07-26 12:21:13.435890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:20.974 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:20.974 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # return 0 00:20:20.974 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:20.974 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:20.974 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 [2024-07-26 12:21:14.249443] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 null0 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g e0dc6bc2267647d089dc0e1bb8116380 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.231 [2024-07-26 12:21:14.289713] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t 
tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.231 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.490 nvme0n1 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.490 [ 00:20:21.490 { 00:20:21.490 "name": "nvme0n1", 00:20:21.490 "aliases": [ 00:20:21.490 "e0dc6bc2-2676-47d0-89dc-0e1bb8116380" 00:20:21.490 ], 00:20:21.490 "product_name": "NVMe disk", 00:20:21.490 "block_size": 512, 00:20:21.490 "num_blocks": 2097152, 00:20:21.490 "uuid": "e0dc6bc2-2676-47d0-89dc-0e1bb8116380", 00:20:21.490 "assigned_rate_limits": { 00:20:21.490 "rw_ios_per_sec": 0, 00:20:21.490 "rw_mbytes_per_sec": 0, 00:20:21.490 "r_mbytes_per_sec": 0, 00:20:21.490 "w_mbytes_per_sec": 0 00:20:21.490 }, 00:20:21.490 "claimed": false, 00:20:21.490 "zoned": false, 00:20:21.490 "supported_io_types": { 00:20:21.490 "read": true, 00:20:21.490 "write": true, 00:20:21.490 "unmap": false, 00:20:21.490 "flush": true, 00:20:21.490 "reset": true, 00:20:21.490 "nvme_admin": true, 00:20:21.490 "nvme_io": true, 00:20:21.490 "nvme_io_md": false, 00:20:21.490 "write_zeroes": true, 00:20:21.490 "zcopy": false, 00:20:21.490 "get_zone_info": false, 00:20:21.490 "zone_management": false, 00:20:21.490 "zone_append": false, 00:20:21.490 "compare": true, 00:20:21.490 "compare_and_write": true, 00:20:21.490 "abort": true, 00:20:21.490 "seek_hole": false, 00:20:21.490 "seek_data": false, 00:20:21.490 "copy": true, 00:20:21.490 "nvme_iov_md": false 
00:20:21.490 }, 00:20:21.490 "memory_domains": [ 00:20:21.490 { 00:20:21.490 "dma_device_id": "system", 00:20:21.490 "dma_device_type": 1 00:20:21.490 } 00:20:21.490 ], 00:20:21.490 "driver_specific": { 00:20:21.490 "nvme": [ 00:20:21.490 { 00:20:21.490 "trid": { 00:20:21.490 "trtype": "TCP", 00:20:21.490 "adrfam": "IPv4", 00:20:21.490 "traddr": "10.0.0.2", 00:20:21.490 "trsvcid": "4420", 00:20:21.490 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:21.490 }, 00:20:21.490 "ctrlr_data": { 00:20:21.490 "cntlid": 1, 00:20:21.490 "vendor_id": "0x8086", 00:20:21.490 "model_number": "SPDK bdev Controller", 00:20:21.490 "serial_number": "00000000000000000000", 00:20:21.490 "firmware_revision": "24.09", 00:20:21.490 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:21.490 "oacs": { 00:20:21.490 "security": 0, 00:20:21.490 "format": 0, 00:20:21.490 "firmware": 0, 00:20:21.490 "ns_manage": 0 00:20:21.490 }, 00:20:21.490 "multi_ctrlr": true, 00:20:21.490 "ana_reporting": false 00:20:21.490 }, 00:20:21.490 "vs": { 00:20:21.490 "nvme_version": "1.3" 00:20:21.490 }, 00:20:21.490 "ns_data": { 00:20:21.490 "id": 1, 00:20:21.490 "can_share": true 00:20:21.490 } 00:20:21.490 } 00:20:21.490 ], 00:20:21.490 "mp_policy": "active_passive" 00:20:21.490 } 00:20:21.490 } 00:20:21.490 ] 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.490 [2024-07-26 12:21:14.542979] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:20:21.490 [2024-07-26 12:21:14.543081] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x120c1d0 
(9): Bad file descriptor 00:20:21.490 [2024-07-26 12:21:14.685230] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.490 [ 00:20:21.490 { 00:20:21.490 "name": "nvme0n1", 00:20:21.490 "aliases": [ 00:20:21.490 "e0dc6bc2-2676-47d0-89dc-0e1bb8116380" 00:20:21.490 ], 00:20:21.490 "product_name": "NVMe disk", 00:20:21.490 "block_size": 512, 00:20:21.490 "num_blocks": 2097152, 00:20:21.490 "uuid": "e0dc6bc2-2676-47d0-89dc-0e1bb8116380", 00:20:21.490 "assigned_rate_limits": { 00:20:21.490 "rw_ios_per_sec": 0, 00:20:21.490 "rw_mbytes_per_sec": 0, 00:20:21.490 "r_mbytes_per_sec": 0, 00:20:21.490 "w_mbytes_per_sec": 0 00:20:21.490 }, 00:20:21.490 "claimed": false, 00:20:21.490 "zoned": false, 00:20:21.490 "supported_io_types": { 00:20:21.490 "read": true, 00:20:21.490 "write": true, 00:20:21.490 "unmap": false, 00:20:21.490 "flush": true, 00:20:21.490 "reset": true, 00:20:21.490 "nvme_admin": true, 00:20:21.490 "nvme_io": true, 00:20:21.490 "nvme_io_md": false, 00:20:21.490 "write_zeroes": true, 00:20:21.490 "zcopy": false, 00:20:21.490 "get_zone_info": false, 00:20:21.490 "zone_management": false, 00:20:21.490 "zone_append": false, 00:20:21.490 "compare": true, 00:20:21.490 "compare_and_write": true, 00:20:21.490 "abort": true, 00:20:21.490 "seek_hole": false, 00:20:21.490 "seek_data": false, 00:20:21.490 "copy": true, 00:20:21.490 "nvme_iov_md": false 00:20:21.490 }, 00:20:21.490 "memory_domains": [ 00:20:21.490 { 00:20:21.490 "dma_device_id": "system", 00:20:21.490 "dma_device_type": 1 
00:20:21.490 } 00:20:21.490 ], 00:20:21.490 "driver_specific": { 00:20:21.490 "nvme": [ 00:20:21.490 { 00:20:21.490 "trid": { 00:20:21.490 "trtype": "TCP", 00:20:21.490 "adrfam": "IPv4", 00:20:21.490 "traddr": "10.0.0.2", 00:20:21.490 "trsvcid": "4420", 00:20:21.490 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:21.490 }, 00:20:21.490 "ctrlr_data": { 00:20:21.490 "cntlid": 2, 00:20:21.490 "vendor_id": "0x8086", 00:20:21.490 "model_number": "SPDK bdev Controller", 00:20:21.490 "serial_number": "00000000000000000000", 00:20:21.490 "firmware_revision": "24.09", 00:20:21.490 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:21.490 "oacs": { 00:20:21.490 "security": 0, 00:20:21.490 "format": 0, 00:20:21.490 "firmware": 0, 00:20:21.490 "ns_manage": 0 00:20:21.490 }, 00:20:21.490 "multi_ctrlr": true, 00:20:21.490 "ana_reporting": false 00:20:21.490 }, 00:20:21.490 "vs": { 00:20:21.490 "nvme_version": "1.3" 00:20:21.490 }, 00:20:21.490 "ns_data": { 00:20:21.490 "id": 1, 00:20:21.490 "can_share": true 00:20:21.490 } 00:20:21.490 } 00:20:21.490 ], 00:20:21.490 "mp_policy": "active_passive" 00:20:21.490 } 00:20:21.490 } 00:20:21.490 ] 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.490 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.PNLHz2EY67 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n 
NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.PNLHz2EY67 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.491 [2024-07-26 12:21:14.735680] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:21.491 [2024-07-26 12:21:14.735852] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PNLHz2EY67 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.491 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.749 [2024-07-26 12:21:14.743689] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in 
v24.09 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.PNLHz2EY67 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.749 [2024-07-26 12:21:14.751723] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:21.749 [2024-07-26 12:21:14.751797] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:21.749 nvme0n1 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.749 [ 00:20:21.749 { 00:20:21.749 "name": "nvme0n1", 00:20:21.749 "aliases": [ 00:20:21.749 "e0dc6bc2-2676-47d0-89dc-0e1bb8116380" 00:20:21.749 ], 00:20:21.749 "product_name": "NVMe disk", 00:20:21.749 "block_size": 512, 00:20:21.749 "num_blocks": 2097152, 00:20:21.749 "uuid": "e0dc6bc2-2676-47d0-89dc-0e1bb8116380", 00:20:21.749 "assigned_rate_limits": { 00:20:21.749 "rw_ios_per_sec": 0, 00:20:21.749 "rw_mbytes_per_sec": 0, 00:20:21.749 "r_mbytes_per_sec": 0, 00:20:21.749 "w_mbytes_per_sec": 0 00:20:21.749 }, 00:20:21.749 "claimed": false, 00:20:21.749 "zoned": false, 00:20:21.749 "supported_io_types": { 
00:20:21.749 "read": true, 00:20:21.749 "write": true, 00:20:21.749 "unmap": false, 00:20:21.749 "flush": true, 00:20:21.749 "reset": true, 00:20:21.749 "nvme_admin": true, 00:20:21.749 "nvme_io": true, 00:20:21.749 "nvme_io_md": false, 00:20:21.749 "write_zeroes": true, 00:20:21.749 "zcopy": false, 00:20:21.749 "get_zone_info": false, 00:20:21.749 "zone_management": false, 00:20:21.749 "zone_append": false, 00:20:21.749 "compare": true, 00:20:21.749 "compare_and_write": true, 00:20:21.749 "abort": true, 00:20:21.749 "seek_hole": false, 00:20:21.749 "seek_data": false, 00:20:21.749 "copy": true, 00:20:21.749 "nvme_iov_md": false 00:20:21.749 }, 00:20:21.749 "memory_domains": [ 00:20:21.749 { 00:20:21.749 "dma_device_id": "system", 00:20:21.749 "dma_device_type": 1 00:20:21.749 } 00:20:21.749 ], 00:20:21.749 "driver_specific": { 00:20:21.749 "nvme": [ 00:20:21.749 { 00:20:21.749 "trid": { 00:20:21.749 "trtype": "TCP", 00:20:21.749 "adrfam": "IPv4", 00:20:21.749 "traddr": "10.0.0.2", 00:20:21.749 "trsvcid": "4421", 00:20:21.749 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:20:21.749 }, 00:20:21.749 "ctrlr_data": { 00:20:21.749 "cntlid": 3, 00:20:21.749 "vendor_id": "0x8086", 00:20:21.749 "model_number": "SPDK bdev Controller", 00:20:21.749 "serial_number": "00000000000000000000", 00:20:21.749 "firmware_revision": "24.09", 00:20:21.749 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:20:21.749 "oacs": { 00:20:21.749 "security": 0, 00:20:21.749 "format": 0, 00:20:21.749 "firmware": 0, 00:20:21.749 "ns_manage": 0 00:20:21.749 }, 00:20:21.749 "multi_ctrlr": true, 00:20:21.749 "ana_reporting": false 00:20:21.749 }, 00:20:21.749 "vs": { 00:20:21.749 "nvme_version": "1.3" 00:20:21.749 }, 00:20:21.749 "ns_data": { 00:20:21.749 "id": 1, 00:20:21.749 "can_share": true 00:20:21.749 } 00:20:21.749 } 00:20:21.749 ], 00:20:21.749 "mp_policy": "active_passive" 00:20:21.749 } 00:20:21.749 } 00:20:21.749 ] 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.PNLHz2EY67 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.749 rmmod nvme_tcp 00:20:21.749 rmmod nvme_fabrics 00:20:21.749 rmmod nvme_keyring 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 2924726 ']' 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 
2924726 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@950 -- # '[' -z 2924726 ']' 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # kill -0 2924726 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # uname 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2924726 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2924726' 00:20:21.749 killing process with pid 2924726 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@969 -- # kill 2924726 00:20:21.749 [2024-07-26 12:21:14.936961] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:21.749 [2024-07-26 12:21:14.936995] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:21.749 12:21:14 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@974 -- # wait 2924726 00:20:22.008 12:21:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:22.008 12:21:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:22.008 12:21:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:22.008 12:21:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == 
\n\v\m\f\_\t\g\t\_\n\s ]] 00:20:22.008 12:21:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:22.008 12:21:15 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:22.008 12:21:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:22.008 12:21:15 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:24.544 00:20:24.544 real 0m6.286s 00:20:24.544 user 0m3.056s 00:20:24.544 sys 0m1.879s 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:20:24.544 ************************************ 00:20:24.544 END TEST nvmf_async_init 00:20:24.544 ************************************ 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.544 ************************************ 00:20:24.544 START TEST dma 00:20:24.544 ************************************ 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:20:24.544 * Looking for test storage... 
00:20:24.544 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.544 12:21:17 
nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # 
'[' -n '' ']' 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:20:24.544 00:20:24.544 real 0m0.069s 00:20:24.544 user 0m0.038s 00:20:24.544 sys 0m0.037s 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:20:24.544 ************************************ 00:20:24.544 END TEST dma 00:20:24.544 ************************************ 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:24.544 12:21:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:24.545 ************************************ 00:20:24.545 START TEST nvmf_identify 00:20:24.545 ************************************ 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:20:24.545 * Looking for test storage... 
00:20:24.545 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:20:24.545 12:21:17 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.446 12:21:19 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:26.446 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:26.446 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:26.446 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:26.446 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:26.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:26.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.254 ms
00:20:26.446 
00:20:26.446 --- 10.0.0.2 ping statistics ---
00:20:26.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:26.446 rtt min/avg/max/mdev = 0.254/0.254/0.254/0.000 ms
00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
00:20:26.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:20:26.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.203 ms
00:20:26.446 
00:20:26.446 --- 10.0.0.1 ping statistics ---
00:20:26.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:20:26.446 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms
00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0
00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']'
00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:20:26.446 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]]
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]]
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']'
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@724 -- # xtrace_disable
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=2926977
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 2926977
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@831 -- # '[' -z 2926977 ']'
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # local max_retries=100
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:20:26.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # xtrace_disable
00:20:26.447 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:26.447 [2024-07-26 12:21:19.596996] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:20:26.447 [2024-07-26 12:21:19.597126] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:20:26.447 EAL: No free 2048 kB hugepages reported on node 1
00:20:26.447 [2024-07-26 12:21:19.670280] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:20:26.705 [2024-07-26 12:21:19.793365] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:20:26.705 [2024-07-26 12:21:19.793431] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:26.705 [2024-07-26 12:21:19.793448] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:26.705 [2024-07-26 12:21:19.793463] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:26.705 [2024-07-26 12:21:19.793475] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:26.705 [2024-07-26 12:21:19.793907] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:20:26.705 [2024-07-26 12:21:19.793980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:20:26.705 [2024-07-26 12:21:19.794004] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:20:26.705 [2024-07-26 12:21:19.794007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:26.705 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:26.705 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # return 0
00:20:26.705 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192
00:20:26.705 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:26.705 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:26.705 [2024-07-26 12:21:19.927226] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:20:26.705 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:26.705 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt
00:20:26.706 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@730 -- # xtrace_disable
00:20:26.706 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:26.706 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:20:26.706 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:26.706 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:26.965 Malloc0
00:20:26.965 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:26.965 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:20:26.965 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:26.965 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:26.965 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:26.965 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
00:20:26.965 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:26.965 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:26.965 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:26.965 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:20:26.965 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:26.965 12:21:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:26.965 [2024-07-26 12:21:19.998364] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:20:26.965 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:26.965 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:20:26.965 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:26.965 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:26.965 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:26.965 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems
00:20:26.965 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable
00:20:26.965 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:20:26.965 [
00:20:26.965 {
00:20:26.965 "nqn": "nqn.2014-08.org.nvmexpress.discovery",
00:20:26.965 "subtype": "Discovery",
00:20:26.965 "listen_addresses": [
00:20:26.965 {
00:20:26.965 "trtype": "TCP",
00:20:26.965 "adrfam": "IPv4",
00:20:26.965 "traddr": "10.0.0.2",
00:20:26.965 "trsvcid": "4420"
00:20:26.965 }
00:20:26.965 ],
00:20:26.965 "allow_any_host": true,
00:20:26.965 "hosts": []
00:20:26.965 },
00:20:26.965 {
00:20:26.965 "nqn": "nqn.2016-06.io.spdk:cnode1",
00:20:26.965 "subtype": "NVMe",
00:20:26.965 "listen_addresses": [
00:20:26.965 {
00:20:26.965 "trtype": "TCP",
00:20:26.965 "adrfam": "IPv4",
00:20:26.965 "traddr": "10.0.0.2",
00:20:26.965 "trsvcid": "4420"
00:20:26.965 }
00:20:26.965 ],
00:20:26.965 "allow_any_host": true,
00:20:26.965 "hosts": [],
00:20:26.965 "serial_number": "SPDK00000000000001",
00:20:26.965 "model_number": "SPDK bdev Controller",
00:20:26.965 "max_namespaces": 32,
00:20:26.965 "min_cntlid": 1,
00:20:26.965 "max_cntlid": 65519,
00:20:26.965 "namespaces": [
00:20:26.965 {
00:20:26.965 "nsid": 1,
00:20:26.965 "bdev_name": "Malloc0",
00:20:26.965 "name": "Malloc0",
00:20:26.965 "nguid": "ABCDEF0123456789ABCDEF0123456789",
00:20:26.965 "eui64": "ABCDEF0123456789",
00:20:26.965 "uuid": "7b49f596-717d-4713-962a-2b6fc02506f7"
00:20:26.965 }
00:20:26.965 ]
00:20:26.965 }
00:20:26.965 ]
00:20:26.965 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:20:26.965 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all
00:20:26.965 [2024-07-26 12:21:20.037624] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:20:26.965 [2024-07-26 12:21:20.037672] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927008 ]
00:20:26.965 EAL: No free 2048 kB hugepages reported on node 1
00:20:26.965 [2024-07-26 12:21:20.072435] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout)
00:20:26.965 [2024-07-26 12:21:20.072495] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2
00:20:26.965 [2024-07-26 12:21:20.072506] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420
00:20:26.965 [2024-07-26 12:21:20.072524] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null)
00:20:26.965 [2024-07-26 12:21:20.072538] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix
00:20:26.965 [2024-07-26 12:21:20.072874] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout)
00:20:26.965 [2024-07-26 12:21:20.072930] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1239540 0
00:20:26.965 [2024-07-26 12:21:20.079033] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1
00:20:26.965 [2024-07-26 12:21:20.079095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1
00:20:26.965 [2024-07-26 12:21:20.079108] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0
00:20:26.965 [2024-07-26 12:21:20.079114] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0
00:20:26.965 [2024-07-26 12:21:20.079184] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.965 [2024-07-26 12:21:20.079197] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.965 [2024-07-26 12:21:20.079206] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1239540)
00:20:26.965 [2024-07-26 12:21:20.079226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400
00:20:26.965 [2024-07-26 12:21:20.079253] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12993c0, cid 0, qid 0
00:20:26.965 [2024-07-26 12:21:20.083075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.965 [2024-07-26 12:21:20.083093] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.965 [2024-07-26 12:21:20.083100] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.965 [2024-07-26 12:21:20.083108] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12993c0) on tqpair=0x1239540
00:20:26.965 [2024-07-26 12:21:20.083130] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001
00:20:26.965 [2024-07-26 12:21:20.083142] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout)
00:20:26.965 [2024-07-26 12:21:20.083152] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout)
00:20:26.965 [2024-07-26 12:21:20.083177] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.965 [2024-07-26 12:21:20.083186] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.965 [2024-07-26 12:21:20.083193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1239540)
00:20:26.965 [2024-07-26 12:21:20.083204] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.965 [2024-07-26 12:21:20.083228] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12993c0, cid 0, qid 0
00:20:26.965 [2024-07-26 12:21:20.083446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.965 [2024-07-26 12:21:20.083458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.965 [2024-07-26 12:21:20.083465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.965 [2024-07-26 12:21:20.083471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12993c0) on tqpair=0x1239540
00:20:26.965 [2024-07-26 12:21:20.083485] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout)
00:20:26.965 [2024-07-26 12:21:20.083499] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout)
00:20:26.965 [2024-07-26 12:21:20.083512] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.965 [2024-07-26 12:21:20.083519] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.965 [2024-07-26 12:21:20.083525] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1239540)
00:20:26.965 [2024-07-26 12:21:20.083536] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.965 [2024-07-26 12:21:20.083572] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12993c0, cid 0, qid 0
00:20:26.965 [2024-07-26 12:21:20.083762] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.965 [2024-07-26 12:21:20.083778] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.965 [2024-07-26 12:21:20.083785] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.965 [2024-07-26 12:21:20.083796] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12993c0) on tqpair=0x1239540
00:20:26.965 [2024-07-26 12:21:20.083806] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout)
00:20:26.965 [2024-07-26 12:21:20.083821] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms)
00:20:26.965 [2024-07-26 12:21:20.083834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.083842] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.083848] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1239540)
00:20:26.966 [2024-07-26 12:21:20.083859] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.966 [2024-07-26 12:21:20.083880] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12993c0, cid 0, qid 0
00:20:26.966 [2024-07-26 12:21:20.083994] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.966 [2024-07-26 12:21:20.084006] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.966 [2024-07-26 12:21:20.084013] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.084020] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12993c0) on tqpair=0x1239540
00:20:26.966 [2024-07-26 12:21:20.084029] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms)
00:20:26.966 [2024-07-26 12:21:20.084055] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.084075] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.084082] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1239540)
00:20:26.966 [2024-07-26 12:21:20.084093] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.966 [2024-07-26 12:21:20.084124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12993c0, cid 0, qid 0
00:20:26.966 [2024-07-26 12:21:20.084239] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.966 [2024-07-26 12:21:20.084251] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.966 [2024-07-26 12:21:20.084257] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.084264] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12993c0) on tqpair=0x1239540
00:20:26.966 [2024-07-26 12:21:20.084273] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0
00:20:26.966 [2024-07-26 12:21:20.084282] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms)
00:20:26.966 [2024-07-26 12:21:20.084294] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms)
00:20:26.966 [2024-07-26 12:21:20.084418] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1
00:20:26.966 [2024-07-26 12:21:20.084436] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms)
00:20:26.966 [2024-07-26 12:21:20.084451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.084458] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.084465] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1239540)
00:20:26.966 [2024-07-26 12:21:20.084475] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.966 [2024-07-26 12:21:20.084495] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12993c0, cid 0, qid 0
00:20:26.966 [2024-07-26 12:21:20.084680] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.966 [2024-07-26 12:21:20.084692] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.966 [2024-07-26 12:21:20.084699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.084706] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12993c0) on tqpair=0x1239540
00:20:26.966 [2024-07-26 12:21:20.084714] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms)
00:20:26.966 [2024-07-26 12:21:20.084730] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.084739] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.084746] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1239540)
00:20:26.966 [2024-07-26 12:21:20.084756] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.966 [2024-07-26 12:21:20.084777] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12993c0, cid 0, qid 0
00:20:26.966 [2024-07-26 12:21:20.084895] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.966 [2024-07-26 12:21:20.084922] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.966 [2024-07-26 12:21:20.084928] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.084935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12993c0) on tqpair=0x1239540
00:20:26.966 [2024-07-26 12:21:20.084942] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready
00:20:26.966 [2024-07-26 12:21:20.084951] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms)
00:20:26.966 [2024-07-26 12:21:20.084964] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout)
00:20:26.966 [2024-07-26 12:21:20.084979] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms)
00:20:26.966 [2024-07-26 12:21:20.084997] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085005] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1239540)
00:20:26.966 [2024-07-26 12:21:20.085016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.966 [2024-07-26 12:21:20.085037] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12993c0, cid 0, qid 0
00:20:26.966 [2024-07-26 12:21:20.085267] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:26.966 [2024-07-26 12:21:20.085283] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:26.966 [2024-07-26 12:21:20.085290] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085297] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1239540): datao=0, datal=4096, cccid=0
00:20:26.966 [2024-07-26 12:21:20.085305] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12993c0) on tqpair(0x1239540): expected_datao=0, payload_size=4096
00:20:26.966 [2024-07-26 12:21:20.085313] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085324] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085333] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.966 [2024-07-26 12:21:20.085385] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.966 [2024-07-26 12:21:20.085393] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085400] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12993c0) on tqpair=0x1239540
00:20:26.966 [2024-07-26 12:21:20.085416] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295
00:20:26.966 [2024-07-26 12:21:20.085425] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072
00:20:26.966 [2024-07-26 12:21:20.085433] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001
00:20:26.966 [2024-07-26 12:21:20.085443] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16
00:20:26.966 [2024-07-26 12:21:20.085451] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1
00:20:26.966 [2024-07-26 12:21:20.085459] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms)
00:20:26.966 [2024-07-26 12:21:20.085474] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms)
00:20:26.966 [2024-07-26 12:21:20.085491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085500] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085506] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1239540)
00:20:26.966 [2024-07-26 12:21:20.085517] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0
00:20:26.966 [2024-07-26 12:21:20.085539] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12993c0, cid 0, qid 0
00:20:26.966 [2024-07-26 12:21:20.085824] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.966 [2024-07-26 12:21:20.085841] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.966 [2024-07-26 12:21:20.085848] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12993c0) on tqpair=0x1239540
00:20:26.966 [2024-07-26 12:21:20.085868] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085875] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085881] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1239540)
00:20:26.966 [2024-07-26 12:21:20.085891] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:20:26.966 [2024-07-26 12:21:20.085901] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085908] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085914] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1239540)
00:20:26.966 [2024-07-26 12:21:20.085938] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:20:26.966 [2024-07-26 12:21:20.085948] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085954] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085960] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1239540)
00:20:26.966 [2024-07-26 12:21:20.085969] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:20:26.966 [2024-07-26 12:21:20.085978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085985] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.085991] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540)
00:20:26.966 [2024-07-26 12:21:20.085999] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:20:26.966 [2024-07-26 12:21:20.086008] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms)
00:20:26.966 [2024-07-26 12:21:20.086030] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms)
00:20:26.966 [2024-07-26 12:21:20.086068] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.086078] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1239540)
00:20:26.966 [2024-07-26 12:21:20.086089] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.966 [2024-07-26 12:21:20.086112] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12993c0, cid 0, qid 0
00:20:26.966 [2024-07-26 12:21:20.086124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299540, cid 1, qid 0
00:20:26.966 [2024-07-26 12:21:20.086132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12996c0, cid 2, qid 0
00:20:26.966 [2024-07-26 12:21:20.086140] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0
00:20:26.966 [2024-07-26 12:21:20.086149] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12999c0, cid 4, qid 0
00:20:26.966 [2024-07-26 12:21:20.086367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.966 [2024-07-26 12:21:20.086382] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.966 [2024-07-26 12:21:20.086389] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.086396] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12999c0) on tqpair=0x1239540
00:20:26.966 [2024-07-26 12:21:20.086405] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us
00:20:26.966 [2024-07-26 12:21:20.086414] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout)
00:20:26.966 [2024-07-26 12:21:20.086448] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.086458] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1239540)
00:20:26.966 [2024-07-26 12:21:20.086469] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.966 [2024-07-26 12:21:20.086489] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12999c0, cid 4, qid 0
00:20:26.966 [2024-07-26 12:21:20.086659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:26.966 [2024-07-26 12:21:20.086674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:26.966 [2024-07-26 12:21:20.086682] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:26.966 [2024-07-26 12:21:20.086688] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1239540): datao=0, datal=4096, cccid=4
00:20:26.966 [2024-07-26 12:21:20.086697] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12999c0) on tqpair(0x1239540): expected_datao=0, payload_size=4096
00:20:26.967 [2024-07-26 12:21:20.086705] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.086722] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.086731] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.127200] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.967 [2024-07-26 12:21:20.127218] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.967 [2024-07-26 12:21:20.127226] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.127233] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12999c0) on tqpair=0x1239540
00:20:26.967 [2024-07-26 12:21:20.127253] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state
00:20:26.967 [2024-07-26 12:21:20.127291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.127302] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1239540)
00:20:26.967 [2024-07-26 12:21:20.127318] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.967 [2024-07-26 12:21:20.127340] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.127348] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.127354] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1239540)
00:20:26.967 [2024-07-26 12:21:20.127363] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000
00:20:26.967 [2024-07-26 12:21:20.127392] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12999c0, cid 4, qid 0
00:20:26.967 [2024-07-26 12:21:20.127404] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299b40, cid 5, qid 0
00:20:26.967 [2024-07-26 12:21:20.127578] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:26.967 [2024-07-26 12:21:20.127591] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:26.967 [2024-07-26 12:21:20.127598] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.127604] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1239540): datao=0, datal=1024, cccid=4
00:20:26.967 [2024-07-26 12:21:20.127612] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12999c0) on tqpair(0x1239540): expected_datao=0, payload_size=1024
00:20:26.967 [2024-07-26 12:21:20.127620] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.127630] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.127637] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.127646] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.967 [2024-07-26 12:21:20.127655] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.967 [2024-07-26 12:21:20.127661] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.127668] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299b40) on tqpair=0x1239540
00:20:26.967 [2024-07-26 12:21:20.168203] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.967 [2024-07-26 12:21:20.168222] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.967 [2024-07-26 12:21:20.168230] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.168238] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12999c0) on tqpair=0x1239540
00:20:26.967 [2024-07-26 12:21:20.168256] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.168265] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1239540)
00:20:26.967 [2024-07-26 12:21:20.168277] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.967 [2024-07-26 12:21:20.168307] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12999c0, cid 4, qid 0
00:20:26.967 [2024-07-26 12:21:20.168454] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:26.967 [2024-07-26 12:21:20.168469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7
00:20:26.967 [2024-07-26 12:21:20.168476] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.168482] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1239540): datao=0, datal=3072, cccid=4
00:20:26.967 [2024-07-26 12:21:20.168490] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12999c0) on tqpair(0x1239540): expected_datao=0, payload_size=3072
00:20:26.967 [2024-07-26 12:21:20.168498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.168508] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.168516] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.168582] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:20:26.967 [2024-07-26 12:21:20.168597] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:20:26.967 [2024-07-26 12:21:20.168603] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.168610] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12999c0) on tqpair=0x1239540
00:20:26.967 [2024-07-26 12:21:20.168625] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:20:26.967 [2024-07-26 12:21:20.168634] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1239540)
00:20:26.967 [2024-07-26 12:21:20.168645] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:20:26.967 [2024-07-26 12:21:20.168673] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x12999c0, cid 4, qid 0
00:20:26.967 [2024-07-26 12:21:20.168831] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7
00:20:26.967 [2024-07-26
12:21:20.168845] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:26.967 [2024-07-26 12:21:20.168852] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:26.967 [2024-07-26 12:21:20.168859] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1239540): datao=0, datal=8, cccid=4 00:20:26.967 [2024-07-26 12:21:20.168866] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x12999c0) on tqpair(0x1239540): expected_datao=0, payload_size=8 00:20:26.967 [2024-07-26 12:21:20.168874] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.967 [2024-07-26 12:21:20.168884] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:26.967 [2024-07-26 12:21:20.168891] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:26.967 [2024-07-26 12:21:20.209245] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.967 [2024-07-26 12:21:20.209264] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.967 [2024-07-26 12:21:20.209271] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.967 [2024-07-26 12:21:20.209278] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12999c0) on tqpair=0x1239540 00:20:26.967 ===================================================== 00:20:26.967 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:20:26.967 ===================================================== 00:20:26.967 Controller Capabilities/Features 00:20:26.967 ================================ 00:20:26.967 Vendor ID: 0000 00:20:26.967 Subsystem Vendor ID: 0000 00:20:26.967 Serial Number: .................... 00:20:26.967 Model Number: ........................................ 
00:20:26.967 Firmware Version: 24.09 00:20:26.967 Recommended Arb Burst: 0 00:20:26.967 IEEE OUI Identifier: 00 00 00 00:20:26.967 Multi-path I/O 00:20:26.967 May have multiple subsystem ports: No 00:20:26.967 May have multiple controllers: No 00:20:26.967 Associated with SR-IOV VF: No 00:20:26.967 Max Data Transfer Size: 131072 00:20:26.967 Max Number of Namespaces: 0 00:20:26.967 Max Number of I/O Queues: 1024 00:20:26.967 NVMe Specification Version (VS): 1.3 00:20:26.967 NVMe Specification Version (Identify): 1.3 00:20:26.967 Maximum Queue Entries: 128 00:20:26.967 Contiguous Queues Required: Yes 00:20:26.967 Arbitration Mechanisms Supported 00:20:26.967 Weighted Round Robin: Not Supported 00:20:26.967 Vendor Specific: Not Supported 00:20:26.967 Reset Timeout: 15000 ms 00:20:26.967 Doorbell Stride: 4 bytes 00:20:26.967 NVM Subsystem Reset: Not Supported 00:20:26.967 Command Sets Supported 00:20:26.967 NVM Command Set: Supported 00:20:26.967 Boot Partition: Not Supported 00:20:26.967 Memory Page Size Minimum: 4096 bytes 00:20:26.967 Memory Page Size Maximum: 4096 bytes 00:20:26.967 Persistent Memory Region: Not Supported 00:20:26.967 Optional Asynchronous Events Supported 00:20:26.967 Namespace Attribute Notices: Not Supported 00:20:26.967 Firmware Activation Notices: Not Supported 00:20:26.967 ANA Change Notices: Not Supported 00:20:26.967 PLE Aggregate Log Change Notices: Not Supported 00:20:26.967 LBA Status Info Alert Notices: Not Supported 00:20:26.967 EGE Aggregate Log Change Notices: Not Supported 00:20:26.967 Normal NVM Subsystem Shutdown event: Not Supported 00:20:26.967 Zone Descriptor Change Notices: Not Supported 00:20:26.967 Discovery Log Change Notices: Supported 00:20:26.967 Controller Attributes 00:20:26.967 128-bit Host Identifier: Not Supported 00:20:26.967 Non-Operational Permissive Mode: Not Supported 00:20:26.967 NVM Sets: Not Supported 00:20:26.967 Read Recovery Levels: Not Supported 00:20:26.967 Endurance Groups: Not Supported 00:20:26.967 
Predictable Latency Mode: Not Supported 00:20:26.967 Traffic Based Keep ALive: Not Supported 00:20:26.967 Namespace Granularity: Not Supported 00:20:26.967 SQ Associations: Not Supported 00:20:26.967 UUID List: Not Supported 00:20:26.967 Multi-Domain Subsystem: Not Supported 00:20:26.967 Fixed Capacity Management: Not Supported 00:20:26.967 Variable Capacity Management: Not Supported 00:20:26.967 Delete Endurance Group: Not Supported 00:20:26.967 Delete NVM Set: Not Supported 00:20:26.967 Extended LBA Formats Supported: Not Supported 00:20:26.967 Flexible Data Placement Supported: Not Supported 00:20:26.967 00:20:26.967 Controller Memory Buffer Support 00:20:26.967 ================================ 00:20:26.967 Supported: No 00:20:26.967 00:20:26.967 Persistent Memory Region Support 00:20:26.967 ================================ 00:20:26.967 Supported: No 00:20:26.967 00:20:26.967 Admin Command Set Attributes 00:20:26.967 ============================ 00:20:26.967 Security Send/Receive: Not Supported 00:20:26.967 Format NVM: Not Supported 00:20:26.967 Firmware Activate/Download: Not Supported 00:20:26.967 Namespace Management: Not Supported 00:20:26.967 Device Self-Test: Not Supported 00:20:26.967 Directives: Not Supported 00:20:26.967 NVMe-MI: Not Supported 00:20:26.967 Virtualization Management: Not Supported 00:20:26.967 Doorbell Buffer Config: Not Supported 00:20:26.967 Get LBA Status Capability: Not Supported 00:20:26.967 Command & Feature Lockdown Capability: Not Supported 00:20:26.967 Abort Command Limit: 1 00:20:26.967 Async Event Request Limit: 4 00:20:26.967 Number of Firmware Slots: N/A 00:20:26.967 Firmware Slot 1 Read-Only: N/A 00:20:26.967 Firmware Activation Without Reset: N/A 00:20:26.968 Multiple Update Detection Support: N/A 00:20:26.968 Firmware Update Granularity: No Information Provided 00:20:26.968 Per-Namespace SMART Log: No 00:20:26.968 Asymmetric Namespace Access Log Page: Not Supported 00:20:26.968 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:20:26.968 Command Effects Log Page: Not Supported 00:20:26.968 Get Log Page Extended Data: Supported 00:20:26.968 Telemetry Log Pages: Not Supported 00:20:26.968 Persistent Event Log Pages: Not Supported 00:20:26.968 Supported Log Pages Log Page: May Support 00:20:26.968 Commands Supported & Effects Log Page: Not Supported 00:20:26.968 Feature Identifiers & Effects Log Page:May Support 00:20:26.968 NVMe-MI Commands & Effects Log Page: May Support 00:20:26.968 Data Area 4 for Telemetry Log: Not Supported 00:20:26.968 Error Log Page Entries Supported: 128 00:20:26.968 Keep Alive: Not Supported 00:20:26.968 00:20:26.968 NVM Command Set Attributes 00:20:26.968 ========================== 00:20:26.968 Submission Queue Entry Size 00:20:26.968 Max: 1 00:20:26.968 Min: 1 00:20:26.968 Completion Queue Entry Size 00:20:26.968 Max: 1 00:20:26.968 Min: 1 00:20:26.968 Number of Namespaces: 0 00:20:26.968 Compare Command: Not Supported 00:20:26.968 Write Uncorrectable Command: Not Supported 00:20:26.968 Dataset Management Command: Not Supported 00:20:26.968 Write Zeroes Command: Not Supported 00:20:26.968 Set Features Save Field: Not Supported 00:20:26.968 Reservations: Not Supported 00:20:26.968 Timestamp: Not Supported 00:20:26.968 Copy: Not Supported 00:20:26.968 Volatile Write Cache: Not Present 00:20:26.968 Atomic Write Unit (Normal): 1 00:20:26.968 Atomic Write Unit (PFail): 1 00:20:26.968 Atomic Compare & Write Unit: 1 00:20:26.968 Fused Compare & Write: Supported 00:20:26.968 Scatter-Gather List 00:20:26.968 SGL Command Set: Supported 00:20:26.968 SGL Keyed: Supported 00:20:26.968 SGL Bit Bucket Descriptor: Not Supported 00:20:26.968 SGL Metadata Pointer: Not Supported 00:20:26.968 Oversized SGL: Not Supported 00:20:26.968 SGL Metadata Address: Not Supported 00:20:26.968 SGL Offset: Supported 00:20:26.968 Transport SGL Data Block: Not Supported 00:20:26.968 Replay Protected Memory Block: Not Supported 00:20:26.968 00:20:26.968 
Firmware Slot Information 00:20:26.968 ========================= 00:20:26.968 Active slot: 0 00:20:26.968 00:20:26.968 00:20:26.968 Error Log 00:20:26.968 ========= 00:20:26.968 00:20:26.968 Active Namespaces 00:20:26.968 ================= 00:20:26.968 Discovery Log Page 00:20:26.968 ================== 00:20:26.968 Generation Counter: 2 00:20:26.968 Number of Records: 2 00:20:26.968 Record Format: 0 00:20:26.968 00:20:26.968 Discovery Log Entry 0 00:20:26.968 ---------------------- 00:20:26.968 Transport Type: 3 (TCP) 00:20:26.968 Address Family: 1 (IPv4) 00:20:26.968 Subsystem Type: 3 (Current Discovery Subsystem) 00:20:26.968 Entry Flags: 00:20:26.968 Duplicate Returned Information: 1 00:20:26.968 Explicit Persistent Connection Support for Discovery: 1 00:20:26.968 Transport Requirements: 00:20:26.968 Secure Channel: Not Required 00:20:26.968 Port ID: 0 (0x0000) 00:20:26.968 Controller ID: 65535 (0xffff) 00:20:26.968 Admin Max SQ Size: 128 00:20:26.968 Transport Service Identifier: 4420 00:20:26.968 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:20:26.968 Transport Address: 10.0.0.2 00:20:26.968 Discovery Log Entry 1 00:20:26.968 ---------------------- 00:20:26.968 Transport Type: 3 (TCP) 00:20:26.968 Address Family: 1 (IPv4) 00:20:26.968 Subsystem Type: 2 (NVM Subsystem) 00:20:26.968 Entry Flags: 00:20:26.968 Duplicate Returned Information: 0 00:20:26.968 Explicit Persistent Connection Support for Discovery: 0 00:20:26.968 Transport Requirements: 00:20:26.968 Secure Channel: Not Required 00:20:26.968 Port ID: 0 (0x0000) 00:20:26.968 Controller ID: 65535 (0xffff) 00:20:26.968 Admin Max SQ Size: 128 00:20:26.968 Transport Service Identifier: 4420 00:20:26.968 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:20:26.968 Transport Address: 10.0.0.2 [2024-07-26 12:21:20.209394] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:20:26.968 [2024-07-26 12:21:20.209417] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12993c0) on tqpair=0x1239540 00:20:26.968 [2024-07-26 12:21:20.209430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.968 [2024-07-26 12:21:20.209439] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299540) on tqpair=0x1239540 00:20:26.968 [2024-07-26 12:21:20.209447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.968 [2024-07-26 12:21:20.209455] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x12996c0) on tqpair=0x1239540 00:20:26.968 [2024-07-26 12:21:20.209462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.968 [2024-07-26 12:21:20.209471] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.968 [2024-07-26 12:21:20.209478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:26.968 [2024-07-26 12:21:20.209497] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.209506] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.209513] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.968 [2024-07-26 12:21:20.209540] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.968 [2024-07-26 12:21:20.209565] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.968 [2024-07-26 12:21:20.209738] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.968 [2024-07-26 12:21:20.209757] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.968 [2024-07-26 12:21:20.209765] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.209772] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.968 [2024-07-26 12:21:20.209784] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.209792] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.209799] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.968 [2024-07-26 12:21:20.209809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.968 [2024-07-26 12:21:20.209836] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.968 [2024-07-26 12:21:20.209972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.968 [2024-07-26 12:21:20.209987] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.968 [2024-07-26 12:21:20.209993] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.210000] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.968 [2024-07-26 12:21:20.210009] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:20:26.968 [2024-07-26 12:21:20.210018] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:20:26.968 [2024-07-26 12:21:20.210034] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.210043] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.968 [2024-07-26 
12:21:20.210049] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.968 [2024-07-26 12:21:20.210068] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.968 [2024-07-26 12:21:20.210091] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.968 [2024-07-26 12:21:20.210261] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.968 [2024-07-26 12:21:20.210273] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.968 [2024-07-26 12:21:20.210280] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.210287] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.968 [2024-07-26 12:21:20.210303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.210312] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.210319] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.968 [2024-07-26 12:21:20.210329] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.968 [2024-07-26 12:21:20.210350] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.968 [2024-07-26 12:21:20.210517] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.968 [2024-07-26 12:21:20.210529] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.968 [2024-07-26 12:21:20.210536] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.210542] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 
00:20:26.968 [2024-07-26 12:21:20.210558] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.210567] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.210574] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.968 [2024-07-26 12:21:20.210584] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.968 [2024-07-26 12:21:20.210608] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.968 [2024-07-26 12:21:20.210727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.968 [2024-07-26 12:21:20.210742] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.968 [2024-07-26 12:21:20.210749] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.210756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.968 [2024-07-26 12:21:20.210772] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.210782] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.968 [2024-07-26 12:21:20.210788] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.968 [2024-07-26 12:21:20.210798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.968 [2024-07-26 12:21:20.210819] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.968 [2024-07-26 12:21:20.210933] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.969 [2024-07-26 12:21:20.210948] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.969 
[2024-07-26 12:21:20.210955] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.210961] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.969 [2024-07-26 12:21:20.210978] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.210987] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.210994] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.969 [2024-07-26 12:21:20.211004] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.969 [2024-07-26 12:21:20.211025] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.969 [2024-07-26 12:21:20.211189] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.969 [2024-07-26 12:21:20.211203] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.969 [2024-07-26 12:21:20.211210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.211217] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.969 [2024-07-26 12:21:20.211233] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.211242] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.211248] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.969 [2024-07-26 12:21:20.211259] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.969 [2024-07-26 12:21:20.211280] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 
0 00:20:26.969 [2024-07-26 12:21:20.211446] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.969 [2024-07-26 12:21:20.211458] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.969 [2024-07-26 12:21:20.211465] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.211472] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.969 [2024-07-26 12:21:20.211487] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.211497] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.211503] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.969 [2024-07-26 12:21:20.211514] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.969 [2024-07-26 12:21:20.211534] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.969 [2024-07-26 12:21:20.211659] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.969 [2024-07-26 12:21:20.211674] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.969 [2024-07-26 12:21:20.211680] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.211687] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.969 [2024-07-26 12:21:20.211704] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.211713] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.211720] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.969 [2024-07-26 12:21:20.211731] nvme_qpair.c: 
218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.969 [2024-07-26 12:21:20.211751] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.969 [2024-07-26 12:21:20.211875] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.969 [2024-07-26 12:21:20.211890] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.969 [2024-07-26 12:21:20.211896] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.211903] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.969 [2024-07-26 12:21:20.211920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.211929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.211936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.969 [2024-07-26 12:21:20.211946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.969 [2024-07-26 12:21:20.211967] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.969 [2024-07-26 12:21:20.212082] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.969 [2024-07-26 12:21:20.212096] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.969 [2024-07-26 12:21:20.212103] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.212110] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.969 [2024-07-26 12:21:20.212126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.212135] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.212142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.969 [2024-07-26 12:21:20.212152] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.969 [2024-07-26 12:21:20.212172] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.969 [2024-07-26 12:21:20.212290] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.969 [2024-07-26 12:21:20.212305] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.969 [2024-07-26 12:21:20.212311] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.212318] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.969 [2024-07-26 12:21:20.212334] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.212344] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.212350] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.969 [2024-07-26 12:21:20.212361] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.969 [2024-07-26 12:21:20.212382] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.969 [2024-07-26 12:21:20.212498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.969 [2024-07-26 12:21:20.212513] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.969 [2024-07-26 12:21:20.212521] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.212528] 
nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.969 [2024-07-26 12:21:20.212544] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.212553] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.212559] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.969 [2024-07-26 12:21:20.212570] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.969 [2024-07-26 12:21:20.212590] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.969 [2024-07-26 12:21:20.212717] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.969 [2024-07-26 12:21:20.212732] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.969 [2024-07-26 12:21:20.212739] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.212745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.969 [2024-07-26 12:21:20.212762] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.212771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.212777] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.969 [2024-07-26 12:21:20.212788] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:26.969 [2024-07-26 12:21:20.212809] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:26.969 [2024-07-26 12:21:20.212963] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:26.969 [2024-07-26 
12:21:20.212977] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:26.969 [2024-07-26 12:21:20.212984] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.212991] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:26.969 [2024-07-26 12:21:20.213007] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.213017] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:26.969 [2024-07-26 12:21:20.213023] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:26.969 [2024-07-26 12:21:20.213034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.231 [2024-07-26 12:21:20.213054] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:27.231 [2024-07-26 12:21:20.217083] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.231 [2024-07-26 12:21:20.217095] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.231 [2024-07-26 12:21:20.217102] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.217109] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:27.231 [2024-07-26 12:21:20.217126] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.217136] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.217143] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1239540) 00:20:27.231 [2024-07-26 12:21:20.217153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.231 [2024-07-26 
12:21:20.217176] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x1299840, cid 3, qid 0 00:20:27.231 [2024-07-26 12:21:20.217346] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.231 [2024-07-26 12:21:20.217361] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.231 [2024-07-26 12:21:20.217373] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.217381] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x1299840) on tqpair=0x1239540 00:20:27.231 [2024-07-26 12:21:20.217394] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 7 milliseconds 00:20:27.231 00:20:27.231 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:20:27.231 [2024-07-26 12:21:20.249754] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:20:27.231 [2024-07-26 12:21:20.249797] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2927011 ] 00:20:27.231 EAL: No free 2048 kB hugepages reported on node 1 00:20:27.231 [2024-07-26 12:21:20.282850] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:20:27.231 [2024-07-26 12:21:20.282897] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:20:27.231 [2024-07-26 12:21:20.282907] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:20:27.231 [2024-07-26 12:21:20.282932] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:20:27.231 [2024-07-26 12:21:20.282943] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:20:27.231 [2024-07-26 12:21:20.283148] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:20:27.231 [2024-07-26 12:21:20.283187] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x23a6540 0 00:20:27.231 [2024-07-26 12:21:20.290075] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:20:27.231 [2024-07-26 12:21:20.290100] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:20:27.231 [2024-07-26 12:21:20.290109] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:20:27.231 [2024-07-26 12:21:20.290116] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:20:27.231 [2024-07-26 12:21:20.290154] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.290166] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:27.231 [2024-07-26 12:21:20.290173] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a6540) 00:20:27.231 [2024-07-26 12:21:20.290187] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:20:27.231 [2024-07-26 12:21:20.290214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24063c0, cid 0, qid 0 00:20:27.231 [2024-07-26 12:21:20.297087] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.231 [2024-07-26 12:21:20.297104] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.231 [2024-07-26 12:21:20.297111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.297118] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24063c0) on tqpair=0x23a6540 00:20:27.231 [2024-07-26 12:21:20.297132] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:20:27.231 [2024-07-26 12:21:20.297142] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:20:27.231 [2024-07-26 12:21:20.297152] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:20:27.231 [2024-07-26 12:21:20.297172] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.297184] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.297191] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a6540) 00:20:27.231 [2024-07-26 12:21:20.297203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.231 [2024-07-26 12:21:20.297226] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24063c0, cid 0, qid 0 
00:20:27.231 [2024-07-26 12:21:20.297395] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.231 [2024-07-26 12:21:20.297410] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.231 [2024-07-26 12:21:20.297416] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.297423] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24063c0) on tqpair=0x23a6540 00:20:27.231 [2024-07-26 12:21:20.297435] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:20:27.231 [2024-07-26 12:21:20.297449] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:20:27.231 [2024-07-26 12:21:20.297462] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.297469] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.297475] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a6540) 00:20:27.231 [2024-07-26 12:21:20.297486] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.231 [2024-07-26 12:21:20.297507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24063c0, cid 0, qid 0 00:20:27.231 [2024-07-26 12:21:20.297639] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.231 [2024-07-26 12:21:20.297650] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.231 [2024-07-26 12:21:20.297657] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.297663] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24063c0) on tqpair=0x23a6540 00:20:27.231 [2024-07-26 12:21:20.297671] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:20:27.231 [2024-07-26 12:21:20.297685] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:20:27.231 [2024-07-26 12:21:20.297697] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.297705] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.297711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a6540) 00:20:27.231 [2024-07-26 12:21:20.297721] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.231 [2024-07-26 12:21:20.297741] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24063c0, cid 0, qid 0 00:20:27.231 [2024-07-26 12:21:20.297873] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.231 [2024-07-26 12:21:20.297884] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.231 [2024-07-26 12:21:20.297891] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.297897] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24063c0) on tqpair=0x23a6540 00:20:27.231 [2024-07-26 12:21:20.297905] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:20:27.231 [2024-07-26 12:21:20.297920] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.231 [2024-07-26 12:21:20.297929] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.297936] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a6540) 00:20:27.232 [2024-07-26 12:21:20.297946] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.232 [2024-07-26 12:21:20.297970] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24063c0, cid 0, qid 0 00:20:27.232 [2024-07-26 12:21:20.298113] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.232 [2024-07-26 12:21:20.298129] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.232 [2024-07-26 12:21:20.298136] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.298143] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24063c0) on tqpair=0x23a6540 00:20:27.232 [2024-07-26 12:21:20.298150] nvme_ctrlr.c:3873:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:20:27.232 [2024-07-26 12:21:20.298159] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:20:27.232 [2024-07-26 12:21:20.298173] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:20:27.232 [2024-07-26 12:21:20.298283] nvme_ctrlr.c:4066:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:20:27.232 [2024-07-26 12:21:20.298291] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:20:27.232 [2024-07-26 12:21:20.298303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.298310] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.298317] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a6540) 00:20:27.232 [2024-07-26 12:21:20.298327] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.232 [2024-07-26 12:21:20.298349] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24063c0, cid 0, qid 0 00:20:27.232 [2024-07-26 12:21:20.298516] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.232 [2024-07-26 12:21:20.298528] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.232 [2024-07-26 12:21:20.298535] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.298541] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24063c0) on tqpair=0x23a6540 00:20:27.232 [2024-07-26 12:21:20.298549] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:20:27.232 [2024-07-26 12:21:20.298565] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.298574] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.298580] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a6540) 00:20:27.232 [2024-07-26 12:21:20.298590] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.232 [2024-07-26 12:21:20.298611] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24063c0, cid 0, qid 0 00:20:27.232 [2024-07-26 12:21:20.298742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.232 [2024-07-26 12:21:20.298754] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.232 [2024-07-26 12:21:20.298761] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.298767] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24063c0) on tqpair=0x23a6540 00:20:27.232 [2024-07-26 
12:21:20.298775] nvme_ctrlr.c:3908:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:20:27.232 [2024-07-26 12:21:20.298783] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:20:27.232 [2024-07-26 12:21:20.298796] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:20:27.232 [2024-07-26 12:21:20.298812] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:20:27.232 [2024-07-26 12:21:20.298826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.298834] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a6540) 00:20:27.232 [2024-07-26 12:21:20.298845] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.232 [2024-07-26 12:21:20.298865] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24063c0, cid 0, qid 0 00:20:27.232 [2024-07-26 12:21:20.299019] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.232 [2024-07-26 12:21:20.299049] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.232 [2024-07-26 12:21:20.299056] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.299073] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a6540): datao=0, datal=4096, cccid=0 00:20:27.232 [2024-07-26 12:21:20.299081] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24063c0) on tqpair(0x23a6540): expected_datao=0, payload_size=4096 00:20:27.232 [2024-07-26 12:21:20.299089] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.299108] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.299118] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341162] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.232 [2024-07-26 12:21:20.341196] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.232 [2024-07-26 12:21:20.341204] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341211] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24063c0) on tqpair=0x23a6540 00:20:27.232 [2024-07-26 12:21:20.341222] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:20:27.232 [2024-07-26 12:21:20.341231] nvme_ctrlr.c:2061:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:20:27.232 [2024-07-26 12:21:20.341239] nvme_ctrlr.c:2064:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:20:27.232 [2024-07-26 12:21:20.341246] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:20:27.232 [2024-07-26 12:21:20.341254] nvme_ctrlr.c:2103:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:20:27.232 [2024-07-26 12:21:20.341262] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:20:27.232 [2024-07-26 12:21:20.341277] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:20:27.232 [2024-07-26 12:21:20.341295] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341304] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341311] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a6540) 00:20:27.232 [2024-07-26 12:21:20.341322] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:27.232 [2024-07-26 12:21:20.341360] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24063c0, cid 0, qid 0 00:20:27.232 [2024-07-26 12:21:20.341509] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.232 [2024-07-26 12:21:20.341521] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.232 [2024-07-26 12:21:20.341528] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341535] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24063c0) on tqpair=0x23a6540 00:20:27.232 [2024-07-26 12:21:20.341545] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341557] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341564] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x23a6540) 00:20:27.232 [2024-07-26 12:21:20.341575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.232 [2024-07-26 12:21:20.341585] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341592] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341599] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x23a6540) 00:20:27.232 [2024-07-26 12:21:20.341608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:20:27.232 [2024-07-26 12:21:20.341617] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341624] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341646] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x23a6540) 00:20:27.232 [2024-07-26 12:21:20.341655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.232 [2024-07-26 12:21:20.341665] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341671] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341677] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.232 [2024-07-26 12:21:20.341686] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.232 [2024-07-26 12:21:20.341709] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:20:27.232 [2024-07-26 12:21:20.341729] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:20:27.232 [2024-07-26 12:21:20.341742] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.232 [2024-07-26 12:21:20.341749] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a6540) 00:20:27.232 [2024-07-26 12:21:20.341760] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.232 [2024-07-26 12:21:20.341783] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x24063c0, cid 0, qid 0 00:20:27.232 [2024-07-26 12:21:20.341794] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406540, cid 1, qid 0 00:20:27.232 [2024-07-26 12:21:20.341802] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24066c0, cid 2, qid 0 00:20:27.232 [2024-07-26 12:21:20.341810] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.233 [2024-07-26 12:21:20.341817] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24069c0, cid 4, qid 0 00:20:27.233 [2024-07-26 12:21:20.341967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.233 [2024-07-26 12:21:20.341979] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.233 [2024-07-26 12:21:20.341986] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.341993] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24069c0) on tqpair=0x23a6540 00:20:27.233 [2024-07-26 12:21:20.342000] nvme_ctrlr.c:3026:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:20:27.233 [2024-07-26 12:21:20.342009] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.342028] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.342052] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.342074] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.342083] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.342091] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a6540) 00:20:27.233 [2024-07-26 12:21:20.342101] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:20:27.233 [2024-07-26 12:21:20.342124] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24069c0, cid 4, qid 0 00:20:27.233 [2024-07-26 12:21:20.342256] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.233 [2024-07-26 12:21:20.342268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.233 [2024-07-26 12:21:20.342274] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.342281] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24069c0) on tqpair=0x23a6540 00:20:27.233 [2024-07-26 12:21:20.342349] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.342370] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.342385] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.342393] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a6540) 00:20:27.233 [2024-07-26 12:21:20.342404] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.233 [2024-07-26 12:21:20.342439] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24069c0, cid 4, qid 0 00:20:27.233 [2024-07-26 12:21:20.342611] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.233 [2024-07-26 12:21:20.342626] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.233 [2024-07-26 12:21:20.342633] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.342639] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a6540): datao=0, datal=4096, cccid=4 00:20:27.233 [2024-07-26 12:21:20.342647] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24069c0) on tqpair(0x23a6540): expected_datao=0, payload_size=4096 00:20:27.233 [2024-07-26 12:21:20.342654] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.342664] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.342672] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.342689] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.233 [2024-07-26 12:21:20.342700] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.233 [2024-07-26 12:21:20.342706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.342713] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24069c0) on tqpair=0x23a6540 00:20:27.233 [2024-07-26 12:21:20.342734] nvme_ctrlr.c:4697:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:20:27.233 [2024-07-26 12:21:20.342750] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.342767] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.342781] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.342803] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on 
tqpair(0x23a6540) 00:20:27.233 [2024-07-26 12:21:20.342813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.233 [2024-07-26 12:21:20.342837] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24069c0, cid 4, qid 0 00:20:27.233 [2024-07-26 12:21:20.343007] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.233 [2024-07-26 12:21:20.343022] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.233 [2024-07-26 12:21:20.343029] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343050] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a6540): datao=0, datal=4096, cccid=4 00:20:27.233 [2024-07-26 12:21:20.343066] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24069c0) on tqpair(0x23a6540): expected_datao=0, payload_size=4096 00:20:27.233 [2024-07-26 12:21:20.343075] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343086] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343094] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343118] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.233 [2024-07-26 12:21:20.343128] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.233 [2024-07-26 12:21:20.343134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343141] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24069c0) on tqpair=0x23a6540 00:20:27.233 [2024-07-26 12:21:20.343165] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:20:27.233 [2024-07-26 
12:21:20.343185] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.343200] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343208] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a6540) 00:20:27.233 [2024-07-26 12:21:20.343219] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.233 [2024-07-26 12:21:20.343241] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24069c0, cid 4, qid 0 00:20:27.233 [2024-07-26 12:21:20.343382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.233 [2024-07-26 12:21:20.343413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.233 [2024-07-26 12:21:20.343420] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343427] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a6540): datao=0, datal=4096, cccid=4 00:20:27.233 [2024-07-26 12:21:20.343435] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24069c0) on tqpair(0x23a6540): expected_datao=0, payload_size=4096 00:20:27.233 [2024-07-26 12:21:20.343442] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343453] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343461] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343487] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.233 [2024-07-26 12:21:20.343498] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.233 [2024-07-26 12:21:20.343505] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343511] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24069c0) on tqpair=0x23a6540 00:20:27.233 [2024-07-26 12:21:20.343524] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.343539] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.343554] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.343569] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.343578] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell buffer config (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.343587] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.343595] nvme_ctrlr.c:3114:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:20:27.233 [2024-07-26 12:21:20.343603] nvme_ctrlr.c:1557:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:20:27.233 [2024-07-26 12:21:20.343612] nvme_ctrlr.c:1563:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:20:27.233 [2024-07-26 12:21:20.343630] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343638] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: 
*DEBUG*: capsule_cmd cid=4 on tqpair(0x23a6540) 00:20:27.233 [2024-07-26 12:21:20.343664] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.233 [2024-07-26 12:21:20.343675] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343682] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.233 [2024-07-26 12:21:20.343688] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23a6540) 00:20:27.233 [2024-07-26 12:21:20.343697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:20:27.233 [2024-07-26 12:21:20.343721] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24069c0, cid 4, qid 0 00:20:27.233 [2024-07-26 12:21:20.343748] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406b40, cid 5, qid 0 00:20:27.233 [2024-07-26 12:21:20.343879] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.234 [2024-07-26 12:21:20.343891] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.234 [2024-07-26 12:21:20.343898] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.343904] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24069c0) on tqpair=0x23a6540 00:20:27.234 [2024-07-26 12:21:20.343914] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.234 [2024-07-26 12:21:20.343923] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.234 [2024-07-26 12:21:20.343929] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.343935] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406b40) on tqpair=0x23a6540 00:20:27.234 [2024-07-26 12:21:20.343950] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.343959] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23a6540) 00:20:27.234 [2024-07-26 12:21:20.343970] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.234 [2024-07-26 12:21:20.343991] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406b40, cid 5, qid 0 00:20:27.234 [2024-07-26 12:21:20.344136] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.234 [2024-07-26 12:21:20.344151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.234 [2024-07-26 12:21:20.344158] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.344165] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406b40) on tqpair=0x23a6540 00:20:27.234 [2024-07-26 12:21:20.344181] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.344190] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23a6540) 00:20:27.234 [2024-07-26 12:21:20.344201] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.234 [2024-07-26 12:21:20.344226] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406b40, cid 5, qid 0 00:20:27.234 [2024-07-26 12:21:20.344366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.234 [2024-07-26 12:21:20.344381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.234 [2024-07-26 12:21:20.344388] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.344395] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406b40) on 
tqpair=0x23a6540 00:20:27.234 [2024-07-26 12:21:20.344411] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.344421] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23a6540) 00:20:27.234 [2024-07-26 12:21:20.344432] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.234 [2024-07-26 12:21:20.344453] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406b40, cid 5, qid 0 00:20:27.234 [2024-07-26 12:21:20.344566] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.234 [2024-07-26 12:21:20.344579] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.234 [2024-07-26 12:21:20.344585] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.344592] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406b40) on tqpair=0x23a6540 00:20:27.234 [2024-07-26 12:21:20.344614] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.344625] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x23a6540) 00:20:27.234 [2024-07-26 12:21:20.344635] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.234 [2024-07-26 12:21:20.344648] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.344655] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x23a6540) 00:20:27.234 [2024-07-26 12:21:20.344665] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.234 [2024-07-26 
12:21:20.344676] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.344683] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x23a6540) 00:20:27.234 [2024-07-26 12:21:20.344692] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.234 [2024-07-26 12:21:20.344704] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.344711] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x23a6540) 00:20:27.234 [2024-07-26 12:21:20.344720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.234 [2024-07-26 12:21:20.344756] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406b40, cid 5, qid 0 00:20:27.234 [2024-07-26 12:21:20.344767] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x24069c0, cid 4, qid 0 00:20:27.234 [2024-07-26 12:21:20.344774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406cc0, cid 6, qid 0 00:20:27.234 [2024-07-26 12:21:20.344781] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406e40, cid 7, qid 0 00:20:27.234 [2024-07-26 12:21:20.345009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.234 [2024-07-26 12:21:20.345026] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.234 [2024-07-26 12:21:20.345033] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.345053] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a6540): datao=0, datal=8192, cccid=5 00:20:27.234 [2024-07-26 12:21:20.349077] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: 
*DEBUG*: tcp_req(0x2406b40) on tqpair(0x23a6540): expected_datao=0, payload_size=8192 00:20:27.234 [2024-07-26 12:21:20.349090] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349112] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349122] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349134] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.234 [2024-07-26 12:21:20.349145] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.234 [2024-07-26 12:21:20.349152] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349158] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a6540): datao=0, datal=512, cccid=4 00:20:27.234 [2024-07-26 12:21:20.349166] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x24069c0) on tqpair(0x23a6540): expected_datao=0, payload_size=512 00:20:27.234 [2024-07-26 12:21:20.349173] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349183] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349190] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349199] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.234 [2024-07-26 12:21:20.349207] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.234 [2024-07-26 12:21:20.349214] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349220] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a6540): datao=0, datal=512, cccid=6 00:20:27.234 [2024-07-26 12:21:20.349228] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2406cc0) on tqpair(0x23a6540): expected_datao=0, 
payload_size=512 00:20:27.234 [2024-07-26 12:21:20.349235] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349245] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349252] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:20:27.234 [2024-07-26 12:21:20.349269] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:20:27.234 [2024-07-26 12:21:20.349275] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349282] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x23a6540): datao=0, datal=4096, cccid=7 00:20:27.234 [2024-07-26 12:21:20.349289] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2406e40) on tqpair(0x23a6540): expected_datao=0, payload_size=4096 00:20:27.234 [2024-07-26 12:21:20.349297] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349307] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349314] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349322] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.234 [2024-07-26 12:21:20.349346] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.234 [2024-07-26 12:21:20.349363] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349369] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406b40) on tqpair=0x23a6540 00:20:27.234 [2024-07-26 12:21:20.349388] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.234 [2024-07-26 12:21:20.349413] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.234 [2024-07-26 
12:21:20.349420] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349426] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24069c0) on tqpair=0x23a6540 00:20:27.234 [2024-07-26 12:21:20.349441] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.234 [2024-07-26 12:21:20.349451] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.234 [2024-07-26 12:21:20.349460] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349466] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406cc0) on tqpair=0x23a6540 00:20:27.234 [2024-07-26 12:21:20.349477] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.234 [2024-07-26 12:21:20.349485] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.234 [2024-07-26 12:21:20.349492] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.234 [2024-07-26 12:21:20.349498] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406e40) on tqpair=0x23a6540 00:20:27.234 ===================================================== 00:20:27.234 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:27.234 ===================================================== 00:20:27.234 Controller Capabilities/Features 00:20:27.234 ================================ 00:20:27.235 Vendor ID: 8086 00:20:27.235 Subsystem Vendor ID: 8086 00:20:27.235 Serial Number: SPDK00000000000001 00:20:27.235 Model Number: SPDK bdev Controller 00:20:27.235 Firmware Version: 24.09 00:20:27.235 Recommended Arb Burst: 6 00:20:27.235 IEEE OUI Identifier: e4 d2 5c 00:20:27.235 Multi-path I/O 00:20:27.235 May have multiple subsystem ports: Yes 00:20:27.235 May have multiple controllers: Yes 00:20:27.235 Associated with SR-IOV VF: No 00:20:27.235 Max Data Transfer Size: 131072 00:20:27.235 Max Number of Namespaces: 32 
00:20:27.235 Max Number of I/O Queues: 127 00:20:27.235 NVMe Specification Version (VS): 1.3 00:20:27.235 NVMe Specification Version (Identify): 1.3 00:20:27.235 Maximum Queue Entries: 128 00:20:27.235 Contiguous Queues Required: Yes 00:20:27.235 Arbitration Mechanisms Supported 00:20:27.235 Weighted Round Robin: Not Supported 00:20:27.235 Vendor Specific: Not Supported 00:20:27.235 Reset Timeout: 15000 ms 00:20:27.235 Doorbell Stride: 4 bytes 00:20:27.235 NVM Subsystem Reset: Not Supported 00:20:27.235 Command Sets Supported 00:20:27.235 NVM Command Set: Supported 00:20:27.235 Boot Partition: Not Supported 00:20:27.235 Memory Page Size Minimum: 4096 bytes 00:20:27.235 Memory Page Size Maximum: 4096 bytes 00:20:27.235 Persistent Memory Region: Not Supported 00:20:27.235 Optional Asynchronous Events Supported 00:20:27.235 Namespace Attribute Notices: Supported 00:20:27.235 Firmware Activation Notices: Not Supported 00:20:27.235 ANA Change Notices: Not Supported 00:20:27.235 PLE Aggregate Log Change Notices: Not Supported 00:20:27.235 LBA Status Info Alert Notices: Not Supported 00:20:27.235 EGE Aggregate Log Change Notices: Not Supported 00:20:27.235 Normal NVM Subsystem Shutdown event: Not Supported 00:20:27.235 Zone Descriptor Change Notices: Not Supported 00:20:27.235 Discovery Log Change Notices: Not Supported 00:20:27.235 Controller Attributes 00:20:27.235 128-bit Host Identifier: Supported 00:20:27.235 Non-Operational Permissive Mode: Not Supported 00:20:27.235 NVM Sets: Not Supported 00:20:27.235 Read Recovery Levels: Not Supported 00:20:27.235 Endurance Groups: Not Supported 00:20:27.235 Predictable Latency Mode: Not Supported 00:20:27.235 Traffic Based Keep ALive: Not Supported 00:20:27.235 Namespace Granularity: Not Supported 00:20:27.235 SQ Associations: Not Supported 00:20:27.235 UUID List: Not Supported 00:20:27.235 Multi-Domain Subsystem: Not Supported 00:20:27.235 Fixed Capacity Management: Not Supported 00:20:27.235 Variable Capacity Management: Not 
Supported 00:20:27.235 Delete Endurance Group: Not Supported 00:20:27.235 Delete NVM Set: Not Supported 00:20:27.235 Extended LBA Formats Supported: Not Supported 00:20:27.235 Flexible Data Placement Supported: Not Supported 00:20:27.235 00:20:27.235 Controller Memory Buffer Support 00:20:27.235 ================================ 00:20:27.235 Supported: No 00:20:27.235 00:20:27.235 Persistent Memory Region Support 00:20:27.235 ================================ 00:20:27.235 Supported: No 00:20:27.235 00:20:27.235 Admin Command Set Attributes 00:20:27.235 ============================ 00:20:27.235 Security Send/Receive: Not Supported 00:20:27.235 Format NVM: Not Supported 00:20:27.235 Firmware Activate/Download: Not Supported 00:20:27.235 Namespace Management: Not Supported 00:20:27.235 Device Self-Test: Not Supported 00:20:27.235 Directives: Not Supported 00:20:27.235 NVMe-MI: Not Supported 00:20:27.235 Virtualization Management: Not Supported 00:20:27.235 Doorbell Buffer Config: Not Supported 00:20:27.235 Get LBA Status Capability: Not Supported 00:20:27.235 Command & Feature Lockdown Capability: Not Supported 00:20:27.235 Abort Command Limit: 4 00:20:27.235 Async Event Request Limit: 4 00:20:27.235 Number of Firmware Slots: N/A 00:20:27.235 Firmware Slot 1 Read-Only: N/A 00:20:27.235 Firmware Activation Without Reset: N/A 00:20:27.235 Multiple Update Detection Support: N/A 00:20:27.235 Firmware Update Granularity: No Information Provided 00:20:27.235 Per-Namespace SMART Log: No 00:20:27.235 Asymmetric Namespace Access Log Page: Not Supported 00:20:27.235 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:20:27.235 Command Effects Log Page: Supported 00:20:27.235 Get Log Page Extended Data: Supported 00:20:27.235 Telemetry Log Pages: Not Supported 00:20:27.235 Persistent Event Log Pages: Not Supported 00:20:27.235 Supported Log Pages Log Page: May Support 00:20:27.235 Commands Supported & Effects Log Page: Not Supported 00:20:27.235 Feature Identifiers & Effects Log Page:May 
Support 00:20:27.235 NVMe-MI Commands & Effects Log Page: May Support 00:20:27.235 Data Area 4 for Telemetry Log: Not Supported 00:20:27.235 Error Log Page Entries Supported: 128 00:20:27.235 Keep Alive: Supported 00:20:27.235 Keep Alive Granularity: 10000 ms 00:20:27.235 00:20:27.235 NVM Command Set Attributes 00:20:27.235 ========================== 00:20:27.235 Submission Queue Entry Size 00:20:27.235 Max: 64 00:20:27.235 Min: 64 00:20:27.235 Completion Queue Entry Size 00:20:27.235 Max: 16 00:20:27.235 Min: 16 00:20:27.235 Number of Namespaces: 32 00:20:27.235 Compare Command: Supported 00:20:27.235 Write Uncorrectable Command: Not Supported 00:20:27.235 Dataset Management Command: Supported 00:20:27.235 Write Zeroes Command: Supported 00:20:27.235 Set Features Save Field: Not Supported 00:20:27.235 Reservations: Supported 00:20:27.235 Timestamp: Not Supported 00:20:27.235 Copy: Supported 00:20:27.235 Volatile Write Cache: Present 00:20:27.235 Atomic Write Unit (Normal): 1 00:20:27.235 Atomic Write Unit (PFail): 1 00:20:27.235 Atomic Compare & Write Unit: 1 00:20:27.235 Fused Compare & Write: Supported 00:20:27.235 Scatter-Gather List 00:20:27.235 SGL Command Set: Supported 00:20:27.235 SGL Keyed: Supported 00:20:27.235 SGL Bit Bucket Descriptor: Not Supported 00:20:27.235 SGL Metadata Pointer: Not Supported 00:20:27.235 Oversized SGL: Not Supported 00:20:27.235 SGL Metadata Address: Not Supported 00:20:27.235 SGL Offset: Supported 00:20:27.235 Transport SGL Data Block: Not Supported 00:20:27.235 Replay Protected Memory Block: Not Supported 00:20:27.235 00:20:27.235 Firmware Slot Information 00:20:27.235 ========================= 00:20:27.235 Active slot: 1 00:20:27.235 Slot 1 Firmware Revision: 24.09 00:20:27.235 00:20:27.235 00:20:27.235 Commands Supported and Effects 00:20:27.235 ============================== 00:20:27.235 Admin Commands 00:20:27.235 -------------- 00:20:27.235 Get Log Page (02h): Supported 00:20:27.235 Identify (06h): Supported 00:20:27.235 
Abort (08h): Supported 00:20:27.235 Set Features (09h): Supported 00:20:27.235 Get Features (0Ah): Supported 00:20:27.235 Asynchronous Event Request (0Ch): Supported 00:20:27.235 Keep Alive (18h): Supported 00:20:27.235 I/O Commands 00:20:27.235 ------------ 00:20:27.235 Flush (00h): Supported LBA-Change 00:20:27.235 Write (01h): Supported LBA-Change 00:20:27.235 Read (02h): Supported 00:20:27.235 Compare (05h): Supported 00:20:27.235 Write Zeroes (08h): Supported LBA-Change 00:20:27.235 Dataset Management (09h): Supported LBA-Change 00:20:27.235 Copy (19h): Supported LBA-Change 00:20:27.235 00:20:27.235 Error Log 00:20:27.235 ========= 00:20:27.235 00:20:27.235 Arbitration 00:20:27.235 =========== 00:20:27.235 Arbitration Burst: 1 00:20:27.235 00:20:27.235 Power Management 00:20:27.235 ================ 00:20:27.235 Number of Power States: 1 00:20:27.235 Current Power State: Power State #0 00:20:27.235 Power State #0: 00:20:27.235 Max Power: 0.00 W 00:20:27.235 Non-Operational State: Operational 00:20:27.235 Entry Latency: Not Reported 00:20:27.235 Exit Latency: Not Reported 00:20:27.235 Relative Read Throughput: 0 00:20:27.235 Relative Read Latency: 0 00:20:27.235 Relative Write Throughput: 0 00:20:27.235 Relative Write Latency: 0 00:20:27.235 Idle Power: Not Reported 00:20:27.235 Active Power: Not Reported 00:20:27.235 Non-Operational Permissive Mode: Not Supported 00:20:27.235 00:20:27.235 Health Information 00:20:27.235 ================== 00:20:27.236 Critical Warnings: 00:20:27.236 Available Spare Space: OK 00:20:27.236 Temperature: OK 00:20:27.236 Device Reliability: OK 00:20:27.236 Read Only: No 00:20:27.236 Volatile Memory Backup: OK 00:20:27.236 Current Temperature: 0 Kelvin (-273 Celsius) 00:20:27.236 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:20:27.236 Available Spare: 0% 00:20:27.236 Available Spare Threshold: 0% 00:20:27.236 Life Percentage Used:[2024-07-26 12:21:20.349605] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:27.236 [2024-07-26 12:21:20.349617] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x23a6540) 00:20:27.236 [2024-07-26 12:21:20.349628] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.236 [2024-07-26 12:21:20.349650] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406e40, cid 7, qid 0 00:20:27.236 [2024-07-26 12:21:20.349798] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.236 [2024-07-26 12:21:20.349811] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.236 [2024-07-26 12:21:20.349818] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.349824] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406e40) on tqpair=0x23a6540 00:20:27.236 [2024-07-26 12:21:20.349864] nvme_ctrlr.c:4361:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:20:27.236 [2024-07-26 12:21:20.349883] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24063c0) on tqpair=0x23a6540 00:20:27.236 [2024-07-26 12:21:20.349893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.236 [2024-07-26 12:21:20.349902] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406540) on tqpair=0x23a6540 00:20:27.236 [2024-07-26 12:21:20.349910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.236 [2024-07-26 12:21:20.349918] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x24066c0) on tqpair=0x23a6540 00:20:27.236 [2024-07-26 12:21:20.349926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:20:27.236 [2024-07-26 12:21:20.349934] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.236 [2024-07-26 12:21:20.349942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:20:27.236 [2024-07-26 12:21:20.349968] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.349976] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.349983] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.236 [2024-07-26 12:21:20.349993] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.236 [2024-07-26 12:21:20.350014] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.236 [2024-07-26 12:21:20.350174] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.236 [2024-07-26 12:21:20.350188] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.236 [2024-07-26 12:21:20.350195] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.350202] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.236 [2024-07-26 12:21:20.350214] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.350221] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.350228] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.236 [2024-07-26 12:21:20.350242] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.236 [2024-07-26 12:21:20.350270] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.236 [2024-07-26 12:21:20.350460] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.236 [2024-07-26 12:21:20.350472] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.236 [2024-07-26 12:21:20.350479] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.350485] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.236 [2024-07-26 12:21:20.350493] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:20:27.236 [2024-07-26 12:21:20.350501] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:20:27.236 [2024-07-26 12:21:20.350516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.350526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.350532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.236 [2024-07-26 12:21:20.350558] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.236 [2024-07-26 12:21:20.350578] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.236 [2024-07-26 12:21:20.350727] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.236 [2024-07-26 12:21:20.350743] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.236 [2024-07-26 12:21:20.350749] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.350756] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.236 [2024-07-26 12:21:20.350774] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.350783] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.350790] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.236 [2024-07-26 12:21:20.350801] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.236 [2024-07-26 12:21:20.350822] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.236 [2024-07-26 12:21:20.350948] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.236 [2024-07-26 12:21:20.350963] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.236 [2024-07-26 12:21:20.350970] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.350976] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.236 [2024-07-26 12:21:20.350993] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.351002] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.351009] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.236 [2024-07-26 12:21:20.351019] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.236 [2024-07-26 12:21:20.351040] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.236 [2024-07-26 12:21:20.351210] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.236 [2024-07-26 12:21:20.351226] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.236 [2024-07-26 12:21:20.351232] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.351239] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.236 [2024-07-26 12:21:20.351256] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.351269] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.351277] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.236 [2024-07-26 12:21:20.351288] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.236 [2024-07-26 12:21:20.351309] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.236 [2024-07-26 12:21:20.351475] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.236 [2024-07-26 12:21:20.351487] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.236 [2024-07-26 12:21:20.351494] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.236 [2024-07-26 12:21:20.351501] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.236 [2024-07-26 12:21:20.351516] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.351526] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.351532] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.237 [2024-07-26 12:21:20.351543] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.237 [2024-07-26 12:21:20.351562] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.237 [2024-07-26 
12:21:20.351685] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.237 [2024-07-26 12:21:20.351699] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.237 [2024-07-26 12:21:20.351706] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.351712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.237 [2024-07-26 12:21:20.351728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.351737] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.351744] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.237 [2024-07-26 12:21:20.351754] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.237 [2024-07-26 12:21:20.351774] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.237 [2024-07-26 12:21:20.351890] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.237 [2024-07-26 12:21:20.351904] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.237 [2024-07-26 12:21:20.351911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.351917] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.237 [2024-07-26 12:21:20.351933] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.351943] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.351949] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.237 [2024-07-26 12:21:20.351960] nvme_qpair.c: 218:nvme_admin_qpair_print_command: 
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.237 [2024-07-26 12:21:20.351979] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.237 [2024-07-26 12:21:20.352149] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.237 [2024-07-26 12:21:20.352165] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.237 [2024-07-26 12:21:20.352172] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.352179] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.237 [2024-07-26 12:21:20.352196] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.352206] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.352216] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.237 [2024-07-26 12:21:20.352228] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.237 [2024-07-26 12:21:20.352249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.237 [2024-07-26 12:21:20.352381] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.237 [2024-07-26 12:21:20.352393] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.237 [2024-07-26 12:21:20.352400] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.352407] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.237 [2024-07-26 12:21:20.352422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.352431] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 
00:20:27.237 [2024-07-26 12:21:20.352438] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.237 [2024-07-26 12:21:20.352448] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.237 [2024-07-26 12:21:20.352468] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.237 [2024-07-26 12:21:20.352587] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.237 [2024-07-26 12:21:20.352601] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.237 [2024-07-26 12:21:20.352608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.352616] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.237 [2024-07-26 12:21:20.352633] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.352642] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.352649] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.237 [2024-07-26 12:21:20.352660] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.237 [2024-07-26 12:21:20.352680] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.237 [2024-07-26 12:21:20.352794] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.237 [2024-07-26 12:21:20.352809] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.237 [2024-07-26 12:21:20.352815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.352822] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) 
on tqpair=0x23a6540 00:20:27.237 [2024-07-26 12:21:20.352838] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.352847] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.352854] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.237 [2024-07-26 12:21:20.352865] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.237 [2024-07-26 12:21:20.352885] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.237 [2024-07-26 12:21:20.353001] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.237 [2024-07-26 12:21:20.353015] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:20:27.237 [2024-07-26 12:21:20.353022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.353029] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.237 [2024-07-26 12:21:20.355105] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.355121] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.355128] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x23a6540) 00:20:27.237 [2024-07-26 12:21:20.355143] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:20:27.237 [2024-07-26 12:21:20.355167] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2406840, cid 3, qid 0 00:20:27.237 [2024-07-26 12:21:20.355335] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:20:27.237 [2024-07-26 12:21:20.355348] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu 
type =5 00:20:27.237 [2024-07-26 12:21:20.355355] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:20:27.237 [2024-07-26 12:21:20.355376] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2406840) on tqpair=0x23a6540 00:20:27.237 [2024-07-26 12:21:20.355389] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 4 milliseconds 00:20:27.237 0% 00:20:27.237 Data Units Read: 0 00:20:27.237 Data Units Written: 0 00:20:27.237 Host Read Commands: 0 00:20:27.237 Host Write Commands: 0 00:20:27.237 Controller Busy Time: 0 minutes 00:20:27.237 Power Cycles: 0 00:20:27.237 Power On Hours: 0 hours 00:20:27.237 Unsafe Shutdowns: 0 00:20:27.237 Unrecoverable Media Errors: 0 00:20:27.237 Lifetime Error Log Entries: 0 00:20:27.237 Warning Temperature Time: 0 minutes 00:20:27.237 Critical Temperature Time: 0 minutes 00:20:27.237 00:20:27.237 Number of Queues 00:20:27.237 ================ 00:20:27.237 Number of I/O Submission Queues: 127 00:20:27.237 Number of I/O Completion Queues: 127 00:20:27.237 00:20:27.237 Active Namespaces 00:20:27.237 ================= 00:20:27.237 Namespace ID:1 00:20:27.237 Error Recovery Timeout: Unlimited 00:20:27.237 Command Set Identifier: NVM (00h) 00:20:27.237 Deallocate: Supported 00:20:27.237 Deallocated/Unwritten Error: Not Supported 00:20:27.237 Deallocated Read Value: Unknown 00:20:27.237 Deallocate in Write Zeroes: Not Supported 00:20:27.237 Deallocated Guard Field: 0xFFFF 00:20:27.237 Flush: Supported 00:20:27.237 Reservation: Supported 00:20:27.237 Namespace Sharing Capabilities: Multiple Controllers 00:20:27.237 Size (in LBAs): 131072 (0GiB) 00:20:27.237 Capacity (in LBAs): 131072 (0GiB) 00:20:27.237 Utilization (in LBAs): 131072 (0GiB) 00:20:27.237 NGUID: ABCDEF0123456789ABCDEF0123456789 00:20:27.237 EUI64: ABCDEF0123456789 00:20:27.237 UUID: 7b49f596-717d-4713-962a-2b6fc02506f7 00:20:27.237 Thin Provisioning: Not Supported 00:20:27.237 Per-NS Atomic 
Units: Yes 00:20:27.237 Atomic Boundary Size (Normal): 0 00:20:27.237 Atomic Boundary Size (PFail): 0 00:20:27.237 Atomic Boundary Offset: 0 00:20:27.237 Maximum Single Source Range Length: 65535 00:20:27.237 Maximum Copy Length: 65535 00:20:27.237 Maximum Source Range Count: 1 00:20:27.237 NGUID/EUI64 Never Reused: No 00:20:27.237 Namespace Write Protected: No 00:20:27.237 Number of LBA Formats: 1 00:20:27.237 Current LBA Format: LBA Format #00 00:20:27.237 LBA Format #00: Data Size: 512 Metadata Size: 0 00:20:27.237 00:20:27.237 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:20:27.237 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@561 -- # xtrace_disable 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:27.238 rmmod nvme_tcp 00:20:27.238 rmmod nvme_fabrics 00:20:27.238 rmmod nvme_keyring 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 2926977 ']' 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 2926977 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@950 -- # '[' -z 2926977 ']' 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # kill -0 2926977 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # uname 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2926977 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2926977' 00:20:27.238 killing process with pid 2926977 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@969 -- # kill 2926977 00:20:27.238 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@974 -- # wait 2926977 00:20:27.808 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:27.808 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:27.808 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:27.808 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- 
# [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:27.808 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:27.808 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:27.808 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:27.808 12:21:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:29.716 00:20:29.716 real 0m5.428s 00:20:29.716 user 0m4.389s 00:20:29.716 sys 0m1.847s 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:20:29.716 ************************************ 00:20:29.716 END TEST nvmf_identify 00:20:29.716 ************************************ 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:20:29.716 ************************************ 00:20:29.716 START TEST nvmf_perf 00:20:29.716 ************************************ 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:20:29.716 * Looking for test storage... 
00:20:29.716 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:29.716 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:29.717 12:21:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:29.717 12:21:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:20:29.717 12:21:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:31.626 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.626 12:21:24 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:31.626 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 
0000:0a:00.0: cvl_0_0' 00:20:31.626 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:31.626 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # 
TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:31.626 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:31.885 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:31.886 
PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:31.886 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.213 ms 00:20:31.886 00:20:31.886 --- 10.0.0.2 ping statistics --- 00:20:31.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.886 rtt min/avg/max/mdev = 0.213/0.213/0.213/0.000 ms 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:31.886 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:31.886 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.125 ms 00:20:31.886 00:20:31.886 --- 10.0.0.1 ping statistics --- 00:20:31.886 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:31.886 rtt min/avg/max/mdev = 0.125/0.125/0.125/0.000 ms 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@724 -- # xtrace_disable 
00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=2928959 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 2928959 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@831 -- # '[' -z 2928959 ']' 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:31.886 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:31.886 12:21:24 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:31.886 [2024-07-26 12:21:24.987678] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:20:31.886 [2024-07-26 12:21:24.987758] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:31.886 EAL: No free 2048 kB hugepages reported on node 1 00:20:31.886 [2024-07-26 12:21:25.056743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:32.144 [2024-07-26 12:21:25.174140] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:20:32.144 [2024-07-26 12:21:25.174200] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:20:32.144 [2024-07-26 12:21:25.174216] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:20:32.144 [2024-07-26 12:21:25.174229] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:20:32.144 [2024-07-26 12:21:25.174240] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:20:32.144 [2024-07-26 12:21:25.174324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:20:32.144 [2024-07-26 12:21:25.174382] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:20:32.144 [2024-07-26 12:21:25.174502] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:20:32.144 [2024-07-26 12:21:25.174505] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
00:20:32.710 12:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:20:32.710 12:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # return 0
00:20:32.710 12:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:20:32.710 12:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@730 -- # xtrace_disable
00:20:32.710 12:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x
00:20:32.710 12:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:20:32.710 12:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh
00:20:32.710 12:21:25 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config
00:20:36.026 12:21:29 nvmf_tcp.nvmf_host.nvmf_perf
-- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:20:36.026 12:21:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:20:36.284 12:21:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:88:00.0 00:20:36.284 12:21:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:20:36.542 12:21:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:20:36.542 12:21:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:88:00.0 ']' 00:20:36.542 12:21:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:20:36.542 12:21:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:20:36.542 12:21:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:20:36.800 [2024-07-26 12:21:29.820529] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:36.800 12:21:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:37.058 12:21:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:37.058 12:21:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:20:37.316 12:21:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:20:37.316 12:21:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:20:37.574 12:21:30 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:37.832 [2024-07-26 12:21:30.836266] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:37.832 12:21:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:38.090 12:21:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:88:00.0 ']' 00:20:38.090 12:21:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:38.090 12:21:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:20:38.090 12:21:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:88:00.0' 00:20:39.030 Initializing NVMe Controllers 00:20:39.030 Attached to NVMe Controller at 0000:88:00.0 [8086:0a54] 00:20:39.030 Associating PCIE (0000:88:00.0) NSID 1 with lcore 0 00:20:39.030 Initialization complete. Launching workers. 
00:20:39.030 ========================================================
00:20:39.030 Latency(us)
00:20:39.030 Device Information : IOPS MiB/s Average min max
00:20:39.030 PCIE (0000:88:00.0) NSID 1 from core 0: 85937.63 335.69 371.90 10.74 8255.04
00:20:39.030 ========================================================
00:20:39.030 Total : 85937.63 335.69 371.90 10.74 8255.04
00:20:39.289 12:21:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:39.289 EAL: No free 2048 kB hugepages reported on node 1
00:20:40.668 Initializing NVMe Controllers
00:20:40.668 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:40.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:40.668 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:40.668 Initialization complete. Launching workers.
00:20:40.668 ========================================================
00:20:40.668 Latency(us)
00:20:40.668 Device Information : IOPS MiB/s Average min max
00:20:40.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 127.93 0.50 7957.64 179.15 44890.62
00:20:40.668 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 59.97 0.23 17339.90 4970.32 51875.32
00:20:40.668 ========================================================
00:20:40.668 Total : 187.90 0.73 10951.98 179.15 51875.32
00:20:40.668 
00:20:40.668 12:21:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:40.668 EAL: No free 2048 kB hugepages reported on node 1
00:20:42.041 Initializing NVMe Controllers
00:20:42.041 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:42.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:42.041 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:42.041 Initialization complete. Launching workers.
00:20:42.041 ========================================================
00:20:42.041 Latency(us)
00:20:42.041 Device Information : IOPS MiB/s Average min max
00:20:42.041 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 8456.92 33.03 3784.12 433.37 8426.52
00:20:42.041 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3850.96 15.04 8354.50 6029.54 16023.52
00:20:42.041 ========================================================
00:20:42.041 Total : 12307.88 48.08 5214.13 433.37 16023.52
00:20:42.041 
00:20:42.041 12:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]]
00:20:42.041 12:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]]
00:20:42.041 12:21:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:20:42.041 EAL: No free 2048 kB hugepages reported on node 1
00:20:44.576 Initializing NVMe Controllers
00:20:44.576 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:44.576 Controller IO queue size 128, less than required.
00:20:44.576 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:44.576 Controller IO queue size 128, less than required.
00:20:44.576 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:44.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:44.576 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:44.576 Initialization complete. Launching workers.
00:20:44.576 ========================================================
00:20:44.576 Latency(us)
00:20:44.576 Device Information : IOPS MiB/s Average min max
00:20:44.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1115.90 278.97 117349.59 71311.25 213858.30
00:20:44.576 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 596.44 149.11 223405.34 84217.89 368113.35
00:20:44.576 ========================================================
00:20:44.576 Total : 1712.34 428.09 154291.05 71311.25 368113.35
00:20:44.576 
00:20:44.576 12:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4
00:20:44.576 EAL: No free 2048 kB hugepages reported on node 1
00:20:44.834 No valid NVMe controllers or AIO or URING devices found
00:20:44.834 Initializing NVMe Controllers
00:20:44.834 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:44.834 Controller IO queue size 128, less than required.
00:20:44.834 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:44.834 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test
00:20:44.834 Controller IO queue size 128, less than required.
00:20:44.834 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:44.834 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. Removing this ns from test
00:20:44.834 WARNING: Some requested NVMe devices were skipped
00:20:44.834 12:21:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat
00:20:44.834 EAL: No free 2048 kB hugepages reported on node 1
00:20:47.366 Initializing NVMe Controllers
00:20:47.366 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:20:47.366 Controller IO queue size 128, less than required.
00:20:47.366 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:47.366 Controller IO queue size 128, less than required.
00:20:47.366 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:20:47.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:20:47.366 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:20:47.366 Initialization complete. Launching workers.
00:20:47.366 
00:20:47.366 ====================
00:20:47.366 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics:
00:20:47.366 TCP transport:
00:20:47.366 polls: 29984
00:20:47.366 idle_polls: 10859
00:20:47.366 sock_completions: 19125
00:20:47.366 nvme_completions: 4837
00:20:47.366 submitted_requests: 7254
00:20:47.366 queued_requests: 1
00:20:47.366 
00:20:47.366 ====================
00:20:47.366 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics:
00:20:47.366 TCP transport:
00:20:47.366 polls: 30331
00:20:47.366 idle_polls: 15238
00:20:47.366 sock_completions: 15093
00:20:47.366 nvme_completions: 2545
00:20:47.366 submitted_requests: 3816
00:20:47.366 queued_requests: 1
00:20:47.366 ========================================================
00:20:47.366 Latency(us)
00:20:47.366 Device Information : IOPS MiB/s Average min max
00:20:47.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1208.98 302.25 109485.12 61439.01 150134.94
00:20:47.366 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 635.99 159.00 206429.00 78290.71 317833.54
00:20:47.366 ========================================================
00:20:47.366 Total : 1844.97 461.24 142903.17 61439.01 317833.54
00:20:47.366 
00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync
00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']'
00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT
00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini
00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup
00:20:47.625 12:21:40
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:47.625 rmmod nvme_tcp 00:20:47.625 rmmod nvme_fabrics 00:20:47.625 rmmod nvme_keyring 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 2928959 ']' 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 2928959 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@950 -- # '[' -z 2928959 ']' 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # kill -0 2928959 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # uname 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2928959 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2928959' 00:20:47.625 killing process with pid 2928959 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf 
-- common/autotest_common.sh@969 -- # kill 2928959 00:20:47.625 12:21:40 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@974 -- # wait 2928959 00:20:49.528 12:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:49.528 12:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:49.528 12:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:49.528 12:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:49.528 12:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:49.528 12:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:49.528 12:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:49.528 12:21:42 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:51.436 00:20:51.436 real 0m21.649s 00:20:51.436 user 1m4.977s 00:20:51.436 sys 0m5.254s 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:20:51.436 ************************************ 00:20:51.436 END TEST nvmf_perf 00:20:51.436 ************************************ 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
00:20:51.436 ************************************ 00:20:51.436 START TEST nvmf_fio_host 00:20:51.436 ************************************ 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:20:51.436 * Looking for test storage... 00:20:51.436 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:51.436 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 
-- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # 
trap nvmftestfini SIGINT SIGTERM EXIT 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:20:51.437 12:21:44 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- 
# local -ga net_devs 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:53.989 12:21:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:20:53.989 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:20:53.989 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:20:53.989 Found net devices under 0000:0a:00.0: cvl_0_0 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:20:53.989 Found net devices under 0000:0a:00.1: cvl_0_1 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:53.989 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:53.990 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:53.990 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.138 ms 00:20:53.990 00:20:53.990 --- 10.0.0.2 ping statistics --- 00:20:53.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.990 rtt min/avg/max/mdev = 0.138/0.138/0.138/0.000 ms 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:53.990 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:53.990 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.126 ms 00:20:53.990 00:20:53.990 --- 10.0.0.1 ping statistics --- 00:20:53.990 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:53.990 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:20:53.990 
12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=2933025 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 2933025 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@831 -- # '[' -z 2933025 ']' 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:53.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:20:53.990 12:21:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:53.990 [2024-07-26 12:21:46.803277] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:20:53.990 [2024-07-26 12:21:46.803369] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:53.990 EAL: No free 2048 kB hugepages reported on node 1 00:20:53.990 [2024-07-26 12:21:46.865794] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:53.990 [2024-07-26 12:21:46.972096] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:53.990 [2024-07-26 12:21:46.972162] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:53.990 [2024-07-26 12:21:46.972175] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:53.990 [2024-07-26 12:21:46.972187] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:53.990 [2024-07-26 12:21:46.972212] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:53.990 [2024-07-26 12:21:46.972272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:53.990 [2024-07-26 12:21:46.972334] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:53.990 [2024-07-26 12:21:46.972401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:53.990 [2024-07-26 12:21:46.972403] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:53.990 12:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:20:53.990 12:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # return 0 00:20:53.990 12:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:20:54.260 [2024-07-26 12:21:47.312919] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:54.260 12:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:20:54.260 12:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:20:54.260 12:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:20:54.260 12:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:20:54.517 Malloc1 00:20:54.517 12:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:54.775 12:21:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:55.033 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:55.290 [2024-07-26 12:21:48.364086] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:55.291 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:20:55.548 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:20:55.548 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:55.548 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:55.548 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:55.548 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:55.548 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:55.549 12:21:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:20:55.808 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:20:55.808 fio-3.35 
00:20:55.808 Starting 1 thread 00:20:55.808 EAL: No free 2048 kB hugepages reported on node 1 00:20:58.337 00:20:58.337 test: (groupid=0, jobs=1): err= 0: pid=2933384: Fri Jul 26 12:21:51 2024 00:20:58.337 read: IOPS=9038, BW=35.3MiB/s (37.0MB/s)(70.8MiB/2006msec) 00:20:58.337 slat (nsec): min=1905, max=161259, avg=2465.15, stdev=1996.31 00:20:58.337 clat (usec): min=2584, max=13631, avg=7821.45, stdev=608.58 00:20:58.337 lat (usec): min=2613, max=13633, avg=7823.91, stdev=608.47 00:20:58.337 clat percentiles (usec): 00:20:58.337 | 1.00th=[ 6456], 5.00th=[ 6849], 10.00th=[ 7111], 20.00th=[ 7373], 00:20:58.337 | 30.00th=[ 7504], 40.00th=[ 7701], 50.00th=[ 7832], 60.00th=[ 7963], 00:20:58.337 | 70.00th=[ 8094], 80.00th=[ 8291], 90.00th=[ 8586], 95.00th=[ 8717], 00:20:58.337 | 99.00th=[ 9241], 99.50th=[ 9372], 99.90th=[11863], 99.95th=[12649], 00:20:58.337 | 99.99th=[13304] 00:20:58.337 bw ( KiB/s): min=35488, max=36496, per=99.91%, avg=36122.00, stdev=440.77, samples=4 00:20:58.337 iops : min= 8872, max= 9124, avg=9030.50, stdev=110.19, samples=4 00:20:58.337 write: IOPS=9056, BW=35.4MiB/s (37.1MB/s)(71.0MiB/2006msec); 0 zone resets 00:20:58.337 slat (usec): min=2, max=123, avg= 2.58, stdev= 1.43 00:20:58.337 clat (usec): min=1415, max=12488, avg=6292.04, stdev=523.78 00:20:58.337 lat (usec): min=1424, max=12490, avg=6294.62, stdev=523.73 00:20:58.337 clat percentiles (usec): 00:20:58.337 | 1.00th=[ 5145], 5.00th=[ 5538], 10.00th=[ 5669], 20.00th=[ 5932], 00:20:58.337 | 30.00th=[ 6063], 40.00th=[ 6194], 50.00th=[ 6325], 60.00th=[ 6390], 00:20:58.337 | 70.00th=[ 6521], 80.00th=[ 6718], 90.00th=[ 6915], 95.00th=[ 7046], 00:20:58.337 | 99.00th=[ 7439], 99.50th=[ 7570], 99.90th=[ 9896], 99.95th=[10945], 00:20:58.337 | 99.99th=[12518] 00:20:58.337 bw ( KiB/s): min=35728, max=36472, per=99.99%, avg=36220.00, stdev=343.69, samples=4 00:20:58.337 iops : min= 8932, max= 9118, avg=9055.00, stdev=85.92, samples=4 00:20:58.337 lat (msec) : 2=0.02%, 4=0.11%, 10=99.75%, 20=0.12% 
00:20:58.337 cpu : usr=56.71%, sys=37.71%, ctx=64, majf=0, minf=40 00:20:58.337 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:20:58.337 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:20:58.337 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:20:58.337 issued rwts: total=18131,18167,0,0 short=0,0,0,0 dropped=0,0,0,0 00:20:58.337 latency : target=0, window=0, percentile=100.00%, depth=128 00:20:58.337 00:20:58.337 Run status group 0 (all jobs): 00:20:58.337 READ: bw=35.3MiB/s (37.0MB/s), 35.3MiB/s-35.3MiB/s (37.0MB/s-37.0MB/s), io=70.8MiB (74.3MB), run=2006-2006msec 00:20:58.337 WRITE: bw=35.4MiB/s (37.1MB/s), 35.4MiB/s-35.4MiB/s (37.1MB/s-37.1MB/s), io=71.0MiB (74.4MB), run=2006-2006msec 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 
00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:58.337 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:20:58.338 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:20:58.338 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:20:58.338 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:20:58.338 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:20:58.338 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:20:58.338 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:20:58.338 12:21:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 
trsvcid=4420 ns=1' 00:20:58.338 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:20:58.338 fio-3.35 00:20:58.338 Starting 1 thread 00:20:58.338 EAL: No free 2048 kB hugepages reported on node 1 00:21:00.877 00:21:00.877 test: (groupid=0, jobs=1): err= 0: pid=2933722: Fri Jul 26 12:21:53 2024 00:21:00.877 read: IOPS=6830, BW=107MiB/s (112MB/s)(215MiB/2012msec) 00:21:00.877 slat (usec): min=2, max=109, avg= 3.59, stdev= 1.71 00:21:00.877 clat (usec): min=3294, max=22891, avg=10577.25, stdev=2645.17 00:21:00.877 lat (usec): min=3298, max=22894, avg=10580.84, stdev=2645.18 00:21:00.877 clat percentiles (usec): 00:21:00.877 | 1.00th=[ 5014], 5.00th=[ 6390], 10.00th=[ 7242], 20.00th=[ 8455], 00:21:00.877 | 30.00th=[ 9241], 40.00th=[ 9896], 50.00th=[10552], 60.00th=[11076], 00:21:00.877 | 70.00th=[11600], 80.00th=[12518], 90.00th=[13960], 95.00th=[15401], 00:21:00.877 | 99.00th=[17695], 99.50th=[18744], 99.90th=[22414], 99.95th=[22414], 00:21:00.877 | 99.99th=[22676] 00:21:00.877 bw ( KiB/s): min=43712, max=73184, per=50.55%, avg=55240.00, stdev=12593.79, samples=4 00:21:00.877 iops : min= 2732, max= 4574, avg=3452.50, stdev=787.11, samples=4 00:21:00.877 write: IOPS=3921, BW=61.3MiB/s (64.3MB/s)(113MiB/1852msec); 0 zone resets 00:21:00.877 slat (usec): min=30, max=125, avg=33.41, stdev= 4.74 00:21:00.877 clat (usec): min=7314, max=30062, avg=14540.96, stdev=3535.96 00:21:00.877 lat (usec): min=7350, max=30094, avg=14574.37, stdev=3535.83 00:21:00.877 clat percentiles (usec): 00:21:00.877 | 1.00th=[ 8029], 5.00th=[ 9110], 10.00th=[ 9896], 20.00th=[11076], 00:21:00.877 | 30.00th=[12125], 40.00th=[13304], 50.00th=[14615], 60.00th=[15926], 00:21:00.877 | 70.00th=[16909], 80.00th=[17695], 90.00th=[19006], 95.00th=[19792], 00:21:00.877 | 99.00th=[22414], 99.50th=[23987], 99.90th=[28443], 99.95th=[28967], 00:21:00.877 | 99.99th=[30016] 00:21:00.877 bw ( KiB/s): min=47136, max=76320, per=91.62%, avg=57488.00, 
stdev=12881.06, samples=4 00:21:00.877 iops : min= 2946, max= 4770, avg=3593.00, stdev=805.07, samples=4 00:21:00.877 lat (msec) : 4=0.08%, 10=30.64%, 20=67.64%, 50=1.65% 00:21:00.877 cpu : usr=66.19%, sys=28.54%, ctx=41, majf=0, minf=60 00:21:00.877 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.5% 00:21:00.877 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:21:00.877 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:21:00.877 issued rwts: total=13742,7263,0,0 short=0,0,0,0 dropped=0,0,0,0 00:21:00.877 latency : target=0, window=0, percentile=100.00%, depth=128 00:21:00.877 00:21:00.877 Run status group 0 (all jobs): 00:21:00.877 READ: bw=107MiB/s (112MB/s), 107MiB/s-107MiB/s (112MB/s-112MB/s), io=215MiB (225MB), run=2012-2012msec 00:21:00.877 WRITE: bw=61.3MiB/s (64.3MB/s), 61.3MiB/s-61.3MiB/s (64.3MB/s-64.3MB/s), io=113MiB (119MB), run=1852-1852msec 00:21:00.877 12:21:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:00.877 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:21:00.877 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:21:00.877 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:21:00.877 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:21:00.877 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:00.877 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:21:00.877 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:00.877 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:21:00.877 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in 
{1..20} 00:21:00.877 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:00.877 rmmod nvme_tcp 00:21:01.136 rmmod nvme_fabrics 00:21:01.136 rmmod nvme_keyring 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 2933025 ']' 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@490 -- # killprocess 2933025 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@950 -- # '[' -z 2933025 ']' 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # kill -0 2933025 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # uname 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2933025 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2933025' 00:21:01.136 killing process with pid 2933025 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@969 -- # kill 2933025 00:21:01.136 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@974 -- # wait 2933025 00:21:01.394 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:01.394 
12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:01.394 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:01.394 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:01.394 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:01.394 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:01.394 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:01.394 12:21:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.302 12:21:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:03.302 00:21:03.302 real 0m12.002s 00:21:03.302 user 0m34.445s 00:21:03.302 sys 0m4.380s 00:21:03.302 12:21:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:03.302 12:21:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.302 ************************************ 00:21:03.302 END TEST nvmf_fio_host 00:21:03.302 ************************************ 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:03.562 ************************************ 00:21:03.562 START TEST nvmf_failover 00:21:03.562 ************************************ 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:21:03.562 * Looking for test storage... 00:21:03.562 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:03.562 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:21:03.563 12:21:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 
00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:05.467 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:05.468 
12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:05.468 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:05.468 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:05.468 12:21:58 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:05.468 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 
00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:05.468 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 
00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:05.468 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:05.468 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:21:05.468 00:21:05.468 --- 10.0.0.2 ping statistics --- 00:21:05.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.468 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:05.468 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:05.468 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.105 ms 00:21:05.468 00:21:05.468 --- 10.0.0.1 ping statistics --- 00:21:05.468 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:05.468 rtt min/avg/max/mdev = 0.105/0.105/0.105/0.000 ms 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:05.468 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:05.727 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:21:05.727 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:05.727 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@724 -- # xtrace_disable 
00:21:05.727 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:05.727 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=2935912 00:21:05.727 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:05.727 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 2935912 00:21:05.727 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2935912 ']' 00:21:05.727 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:05.727 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:05.727 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:05.727 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:05.727 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:05.727 12:21:58 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:05.727 [2024-07-26 12:21:58.771238] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:21:05.727 [2024-07-26 12:21:58.771332] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:05.727 EAL: No free 2048 kB hugepages reported on node 1 00:21:05.727 [2024-07-26 12:21:58.835453] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:05.727 [2024-07-26 12:21:58.950982] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:05.727 [2024-07-26 12:21:58.951033] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:05.727 [2024-07-26 12:21:58.951076] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:05.727 [2024-07-26 12:21:58.951088] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:05.727 [2024-07-26 12:21:58.951099] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:05.727 [2024-07-26 12:21:58.951224] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:05.727 [2024-07-26 12:21:58.951297] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:05.727 [2024-07-26 12:21:58.951294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:05.985 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:05.985 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:21:05.985 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:05.985 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:05.985 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:05.985 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:05.985 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:06.243 [2024-07-26 12:21:59.303207] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:06.243 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:06.501 Malloc0 00:21:06.501 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:06.759 12:21:59 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:07.017 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:07.274 [2024-07-26 12:22:00.323930] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:07.274 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:07.532 [2024-07-26 12:22:00.572701] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:07.532 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:07.791 [2024-07-26 12:22:00.829577] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:07.791 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=2936200 00:21:07.791 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:21:07.791 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:07.791 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 2936200 /var/tmp/bdevperf.sock 00:21:07.791 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2936200 ']' 00:21:07.791 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:07.791 12:22:00 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:07.791 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:07.791 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:07.791 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:07.791 12:22:00 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:08.049 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:08.049 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:21:08.049 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:08.616 NVMe0n1 00:21:08.616 12:22:01 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:08.875 00:21:08.875 12:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=2936332 00:21:08.875 12:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:08.875 12:22:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:21:09.815 12:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 
-t tcp -a 10.0.0.2 -s 4420 00:21:10.075 [2024-07-26 12:22:03.228779] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144af40 is same with the state(5) to be set 00:21:10.076 [identical recv-state error for tqpair=0x144af40 repeated from 12:22:03.228902 through 12:22:03.229482] 00:21:10.076 12:22:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:21:13.370 12:22:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:13.370 00:21:13.629 12:22:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:13.889 [2024-07-26 12:22:06.888029] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144bd10 is same with the state(5) to be set 00:21:13.890 [identical recv-state error for tqpair=0x144bd10 repeated from 12:22:06.888131 through 12:22:06.888800] 00:21:13.890 12:22:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:21:17.178 12:22:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:17.178 [2024-07-26 12:22:10.145725] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:17.178 12:22:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:21:18.116 12:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:21:18.375 [2024-07-26 12:22:11.437573] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144cab0 is same with the state(5) to be set 00:21:18.375 [2024-07-26 12:22:11.437658] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x144cab0 is same with the state(5) to be set 00:21:18.375 [2024-07-26 12:22:11.437689] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x144cab0 is same with the state(5) to be set 00:21:18.375 [identical recv-state error for tqpair=0x144cab0 repeated from 12:22:11.437710 through 12:22:11.438193] 00:21:18.376 12:22:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 2936332 00:21:24.944 0 00:21:24.944 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 2936200 00:21:24.944 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2936200 ']' 00:21:24.945 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2936200 00:21:24.945 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:24.945 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:24.945 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2936200 00:21:24.945 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:24.945 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:24.945 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2936200' 00:21:24.945 killing process with pid 2936200 00:21:24.945 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2936200 00:21:24.945 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2936200 00:21:24.945 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:24.945 [2024-07-26 12:22:00.893541] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:21:24.945 [2024-07-26 12:22:00.893633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2936200 ] 00:21:24.945 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.945 [2024-07-26 12:22:00.952432] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.945 [2024-07-26 12:22:01.061572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:24.945 Running I/O for 15 seconds... 00:21:24.945 [2024-07-26 12:22:03.230325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:71880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:71888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:71896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:71904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:71912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:71920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:71928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:71936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:71944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:71952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 
12:22:03.230679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:71960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:71968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:71976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:71984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:71992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230840] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:85 nsid:1 lba:72000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:72008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:72016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:72024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:72032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.230978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:72040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.230991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.231006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:72048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.231019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.231033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:72056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.231066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.231085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:72064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.231103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.231119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:72072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.231133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.231149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:72080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.231162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.231178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:72088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 
12:22:03.231191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.231207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:72096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.231221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.945 [2024-07-26 12:22:03.231237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:72104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.945 [2024-07-26 12:22:03.231251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:72112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:72120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:72128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231369] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:126 nsid:1 lba:72136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:72144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:72152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:72160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:72168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:72176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:72184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:72192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:72200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:72216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:72224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231703] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:72232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:72240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:72248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.946 [2024-07-26 12:22:03.231784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:72296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.231812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:72304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.231843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:78 nsid:1 lba:72312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.231872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:72320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.231898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:72328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.231926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:72336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.231953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:72344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.231980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.231995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:72352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.232007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 
[2024-07-26 12:22:03.232021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:72360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.232034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.232074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:72368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.232091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.232106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:72376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.232120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.232135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:72384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.232149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.232164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:72392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.232178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.232194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:72400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.232208] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.232223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:72408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.232240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.232255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:72416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.232270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.232285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:72424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.232299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.946 [2024-07-26 12:22:03.232314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:72432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.946 [2024-07-26 12:22:03.232329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:72440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 
lba:72448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:72456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:72464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:72472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:72480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:72488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 
[2024-07-26 12:22:03.232551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:72496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:72504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:72512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:72520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:72528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:72536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232705] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:72544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:72552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:72560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:72568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:72576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 
lba:72584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:72592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:72600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:72608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.232981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:72616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.232994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.233008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:72624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.233022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 
12:22:03.233036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:72632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.233073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.233090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:72640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.233105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.233120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:72648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.233134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.233149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:72656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.233163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.233178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:72664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.233192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.233208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:72672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.947 [2024-07-26 12:22:03.233221] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.233257] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.947 [2024-07-26 12:22:03.233275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72680 len:8 PRP1 0x0 PRP2 0x0 00:21:24.947 [2024-07-26 12:22:03.233289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.233307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.947 [2024-07-26 12:22:03.233320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.947 [2024-07-26 12:22:03.233332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72688 len:8 PRP1 0x0 PRP2 0x0 00:21:24.947 [2024-07-26 12:22:03.233345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.233374] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.947 [2024-07-26 12:22:03.233385] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.947 [2024-07-26 12:22:03.233396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72696 len:8 PRP1 0x0 PRP2 0x0 00:21:24.947 [2024-07-26 12:22:03.233413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.947 [2024-07-26 12:22:03.233427] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.947 [2024-07-26 12:22:03.233438] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.947 [2024-07-26 
12:22:03.233449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72704 len:8 PRP1 0x0 PRP2 0x0 00:21:24.947 [2024-07-26 12:22:03.233462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.233474] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.233485] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.233496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72712 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.233509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.233521] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.233532] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.233543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72720 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.233556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.233569] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.233579] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.233590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72728 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.233603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.233616] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.233627] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.233638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72736 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.233651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.233663] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.233674] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.233685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72744 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.233698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.233711] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.233721] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.233733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72752 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.233745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.233759] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.233770] nvme_qpair.c: 
558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.233784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72760 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.233797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.233810] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.233821] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.233832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72768 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.233845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.233858] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.233869] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.233881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72776 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.233893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.233906] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.233916] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.233927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72784 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 
[2024-07-26 12:22:03.233941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.233953] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.233964] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.233975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72792 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.233988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.234000] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.234011] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.234022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72800 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.234035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.234071] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.234084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.234096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72808 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.234109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.234124] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: 
*ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.234135] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.234147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72816 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.234160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.234174] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.234191] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.234203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72824 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.234217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.234230] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.234242] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.234253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72832 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.234266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.234279] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.234290] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.234302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72840 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.234315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.234328] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.234339] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.234351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72848 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.234378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.234392] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.234403] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.234414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72856 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.234426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.234439] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.234450] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.234461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72864 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.234474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.234486] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.234497] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.234508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72872 len:8 PRP1 0x0 PRP2 0x0 00:21:24.948 [2024-07-26 12:22:03.234529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.948 [2024-07-26 12:22:03.234543] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.948 [2024-07-26 12:22:03.234554] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.948 [2024-07-26 12:22:03.234566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72880 len:8 PRP1 0x0 PRP2 0x0 00:21:24.949 [2024-07-26 12:22:03.234578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:03.234594] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.949 [2024-07-26 12:22:03.234606] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.949 [2024-07-26 12:22:03.234617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72888 len:8 PRP1 0x0 PRP2 0x0 00:21:24.949 [2024-07-26 12:22:03.234630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:03.234642] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.949 [2024-07-26 12:22:03.234655] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:24.949 [2024-07-26 12:22:03.234666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:72896 len:8 PRP1 0x0 PRP2 0x0 00:21:24.949 [2024-07-26 12:22:03.234678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:03.234692] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.949 [2024-07-26 12:22:03.249133] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.949 [2024-07-26 12:22:03.249164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72256 len:8 PRP1 0x0 PRP2 0x0 00:21:24.949 [2024-07-26 12:22:03.249180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:03.249196] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.949 [2024-07-26 12:22:03.249208] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.949 [2024-07-26 12:22:03.249219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72264 len:8 PRP1 0x0 PRP2 0x0 00:21:24.949 [2024-07-26 12:22:03.249231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:03.249244] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.949 [2024-07-26 12:22:03.249269] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.949 [2024-07-26 12:22:03.249281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72272 len:8 PRP1 0x0 PRP2 0x0 00:21:24.949 [2024-07-26 12:22:03.249294] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:03.249307] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.949 [2024-07-26 12:22:03.249319] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.949 [2024-07-26 12:22:03.249331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72280 len:8 PRP1 0x0 PRP2 0x0 00:21:24.949 [2024-07-26 12:22:03.249343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:03.249356] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.949 [2024-07-26 12:22:03.249367] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.949 [2024-07-26 12:22:03.249378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:72288 len:8 PRP1 0x0 PRP2 0x0 00:21:24.949 [2024-07-26 12:22:03.249392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:03.249463] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc46c10 was disconnected and freed. reset controller. 
00:21:24.949 [2024-07-26 12:22:03.249482] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:24.949 [2024-07-26 12:22:03.249529] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.949 [2024-07-26 12:22:03.249548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:03.249565] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.949 [2024-07-26 12:22:03.249579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:03.249593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.949 [2024-07-26 12:22:03.249607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:03.249621] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.949 [2024-07-26 12:22:03.249635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:03.249648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:21:24.949 [2024-07-26 12:22:03.249701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc290f0 (9): Bad file descriptor 00:21:24.949 [2024-07-26 12:22:03.253007] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.949 [2024-07-26 12:22:03.288117] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:24.949 [2024-07-26 12:22:06.890356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:63448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.949 [2024-07-26 12:22:06.890416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:06.890444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:63456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.949 [2024-07-26 12:22:06.890460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:06.890476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:63464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.949 [2024-07-26 12:22:06.890489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:06.890505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:63472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.949 [2024-07-26 12:22:06.890518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:06.890533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:63480 len:8 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:21:24.949 [2024-07-26 12:22:06.890547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:06.890562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:63488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.949 [2024-07-26 12:22:06.890575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:06.890589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:63496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.949 [2024-07-26 12:22:06.890605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:06.890625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:63504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.949 [2024-07-26 12:22:06.890639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:06.890654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:63512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.949 [2024-07-26 12:22:06.890667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.949 [2024-07-26 12:22:06.890682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:63520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.949 [2024-07-26 12:22:06.890696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.950 [2024-07-26 12:22:06.890710] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:63528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.950 [2024-07-26 12:22:06.890723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.950 [2024-07-26 12:22:06.890737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:63536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.950 [2024-07-26 12:22:06.890751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.950 [2024-07-26 12:22:06.890766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:63544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.950 [2024-07-26 12:22:06.890779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.950 [2024-07-26 12:22:06.890794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:63552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.950 [2024-07-26 12:22:06.890807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.950 [2024-07-26 12:22:06.890821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:63560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.950 [2024-07-26 12:22:06.890835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.950 [2024-07-26 12:22:06.890849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:63568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.950 [2024-07-26 12:22:06.890862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.950 [2024-07-26 12:22:06.890876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:63576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.950 [2024-07-26 12:22:06.890889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.950 [2024-07-26 12:22:06.890904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:63584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.890917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.890932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:63592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.890944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.890959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:63600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.890972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.890990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:63608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:63320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.951 
[2024-07-26 12:22:06.891031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:63328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.951 [2024-07-26 12:22:06.891068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:63336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.951 [2024-07-26 12:22:06.891116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:63344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.951 [2024-07-26 12:22:06.891147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:63352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.951 [2024-07-26 12:22:06.891178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:63360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.951 [2024-07-26 12:22:06.891206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891221] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:63368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.951 [2024-07-26 12:22:06.891235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:63376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.951 [2024-07-26 12:22:06.891264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:63616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:63624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:63632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:63640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:63648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:63656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:63664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:63672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:63680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:63688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 
[2024-07-26 12:22:06.891591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:63696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:63704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:63712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:63720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:63728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891751] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:63736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:63744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:63752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:63760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:63768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:63776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:63784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:63792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.891974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.891989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:63800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.892003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.892018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:63808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.951 [2024-07-26 12:22:06.892032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.951 [2024-07-26 12:22:06.892047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:63816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:63824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892102] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:63832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:63840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:63848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:63856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:63864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 
lba:63872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:63880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:63888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:63896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:63904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:63384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.952 [2024-07-26 12:22:06.892427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 
[2024-07-26 12:22:06.892443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:63392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.952 [2024-07-26 12:22:06.892456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:63400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.952 [2024-07-26 12:22:06.892485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:63408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.952 [2024-07-26 12:22:06.892515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:63416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.952 [2024-07-26 12:22:06.892548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:63424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.952 [2024-07-26 12:22:06.892580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:63432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.952 [2024-07-26 12:22:06.892610] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:63440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:24.952 [2024-07-26 12:22:06.892641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:63912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:63920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:63928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:63936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 
lba:63944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:63952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:63960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:63968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:63976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:63984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 
12:22:06.892953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:63992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.892983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:64000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.892997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.893013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:64008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.893028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.893044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:64016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.893064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.893082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:64024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.893097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.893112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:64032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.893127] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.893143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:64040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.893157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.952 [2024-07-26 12:22:06.893173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:64048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.952 [2024-07-26 12:22:06.893187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:64056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:64064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:64072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:64080 len:8 SGL DATA BLOCK OFFSET 
0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:64088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:64104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:64112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:64120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893477] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:64128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:64136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:64144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:64152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:64160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:64168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:64176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:64184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:64192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:64200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:64208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:64216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 
[2024-07-26 12:22:06.893824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:64224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:64232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:64240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:64248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:64256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.893971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.893987] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:64264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.894002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.894018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:64272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.894032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.894047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:64280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.894069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.894090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:64288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.953 [2024-07-26 12:22:06.894109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.894139] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.953 [2024-07-26 12:22:06.894155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64296 len:8 PRP1 0x0 PRP2 0x0 00:21:24.953 [2024-07-26 12:22:06.894169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.894187] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.953 [2024-07-26 12:22:06.894199] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:21:24.953 [2024-07-26 12:22:06.894212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64304 len:8 PRP1 0x0 PRP2 0x0 00:21:24.953 [2024-07-26 12:22:06.894225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.894239] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.953 [2024-07-26 12:22:06.894251] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.953 [2024-07-26 12:22:06.894262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64312 len:8 PRP1 0x0 PRP2 0x0 00:21:24.953 [2024-07-26 12:22:06.894275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.953 [2024-07-26 12:22:06.894288] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.953 [2024-07-26 12:22:06.894300] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.953 [2024-07-26 12:22:06.894318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64320 len:8 PRP1 0x0 PRP2 0x0 00:21:24.953 [2024-07-26 12:22:06.894332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:06.894345] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.954 [2024-07-26 12:22:06.894357] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.954 [2024-07-26 12:22:06.894369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64328 len:8 PRP1 0x0 PRP2 0x0 00:21:24.954 [2024-07-26 12:22:06.894382] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:06.894395] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.954 [2024-07-26 12:22:06.894406] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:21:24.954 [2024-07-26 12:22:06.894417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:64336 len:8 PRP1 0x0 PRP2 0x0 00:21:24.954 [2024-07-26 12:22:06.894431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:06.894491] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc57d40 was disconnected and freed. reset controller. 00:21:24.954 [2024-07-26 12:22:06.894509] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422 00:21:24.954 [2024-07-26 12:22:06.894542] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.954 [2024-07-26 12:22:06.894561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:06.894582] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.954 [2024-07-26 12:22:06.894596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:06.894610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.954 [2024-07-26 12:22:06.894624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:06.894638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.954 [2024-07-26 12:22:06.894651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:06.894664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:24.954 [2024-07-26 12:22:06.897926] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:24.954 [2024-07-26 12:22:06.897966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc290f0 (9): Bad file descriptor 00:21:24.954 [2024-07-26 12:22:06.935379] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:24.954 [2024-07-26 12:22:11.439434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:106528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.954 [2024-07-26 12:22:11.439474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:11.439508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.954 [2024-07-26 12:22:11.439524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:11.439539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:106544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.954 [2024-07-26 12:22:11.439553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:11.439570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.954 [2024-07-26 12:22:11.439583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:11.439599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:106560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.954 [2024-07-26 12:22:11.439613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:11.439629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:106568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.954 [2024-07-26 12:22:11.439642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:11.439658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:106576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.954 [2024-07-26 12:22:11.439671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:11.439686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:106584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.954 [2024-07-26 12:22:11.439699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.954 [2024-07-26 12:22:11.439719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:106592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:21:24.954 [2024-07-26 
00:21:24.954 [2024-07-26 12:22:11.439733 .. 12:22:11.443302] nvme_qpair.c: 243:nvme_io_qpair_print_command / 474:spdk_nvme_print_completion: *NOTICE*: (repeated for each queued command) WRITE sqid:1 nsid:1 lba:106600..107472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000, interleaved with READ sqid:1 nsid:1 lba:106464..106520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; every command completed with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:24.957 [2024-07-26 12:22:11.443332] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:21:24.957 [2024-07-26 12:22:11.443347] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:21:24.957 [2024-07-26 12:22:11.443360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:107480 len:8 PRP1 0x0 PRP2 0x0 00:21:24.957 [2024-07-26 12:22:11.443388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.957 [2024-07-26 12:22:11.443447] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xc59b40 was disconnected and freed. reset controller. 00:21:24.957 [2024-07-26 12:22:11.443464] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:21:24.957 [2024-07-26 12:22:11.443512] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.957 [2024-07-26 12:22:11.443531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.957 [2024-07-26 12:22:11.443546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.958 [2024-07-26 12:22:11.443560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.958 [2024-07-26 12:22:11.443574] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.958 [2024-07-26 12:22:11.443587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:24.958 [2024-07-26 12:22:11.443601] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:24.958 [2024-07-26 12:22:11.443614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:24.958 [2024-07-26 12:22:11.443628] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:21:24.958 [2024-07-26 12:22:11.443665] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xc290f0 (9): Bad file descriptor
00:21:24.958 [2024-07-26 12:22:11.446934] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:21:24.958 [2024-07-26 12:22:11.479775] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
00:21:24.958
00:21:24.958 Latency(us)
00:21:24.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:24.958 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:24.958 Verification LBA range: start 0x0 length 0x4000
00:21:24.958 NVMe0n1 : 15.01 8301.97 32.43 258.76 0.00 14920.62 794.93 29903.83
00:21:24.958 ===================================================================================================================
00:21:24.958 Total : 8301.97 32.43 258.76 0.00 14920.62 794.93 29903.83
00:21:24.958 Received shutdown signal, test time was about 15.000000 seconds
00:21:24.958
00:21:24.958 Latency(us)
00:21:24.958 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:24.958 ===================================================================================================================
00:21:24.958 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=2938176
00:21:24.958 12:22:17
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 2938176 /var/tmp/bdevperf.sock 00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@831 -- # '[' -z 2938176 ']' 00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.958 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # return 0 00:21:24.958 12:22:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:24.958 [2024-07-26 12:22:18.014678] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:24.958 12:22:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 
00:21:25.216 [2024-07-26 12:22:18.315573] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:21:25.216 12:22:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:25.474 NVMe0n1 00:21:25.474 12:22:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:25.731 00:21:25.731 12:22:18 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.298 00:21:26.298 12:22:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:26.298 12:22:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:21:26.557 12:22:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:26.816 12:22:19 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:21:30.105 12:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:30.105 12:22:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:21:30.105 12:22:23 
nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=2938852 00:21:30.105 12:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:30.105 12:22:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 2938852 00:21:31.506 0 00:21:31.506 12:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:31.506 [2024-07-26 12:22:17.514218] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:21:31.506 [2024-07-26 12:22:17.514312] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2938176 ] 00:21:31.506 EAL: No free 2048 kB hugepages reported on node 1 00:21:31.506 [2024-07-26 12:22:17.573627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:31.506 [2024-07-26 12:22:17.678410] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:31.506 [2024-07-26 12:22:19.956570] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:21:31.506 [2024-07-26 12:22:19.956673] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.506 [2024-07-26 12:22:19.956696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.506 [2024-07-26 12:22:19.956713] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.506 [2024-07-26 12:22:19.956743] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.506 [2024-07-26 12:22:19.956758] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.506 [2024-07-26 12:22:19.956772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.506 [2024-07-26 12:22:19.956788] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:31.506 [2024-07-26 12:22:19.956803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:31.506 [2024-07-26 12:22:19.956817] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:31.506 [2024-07-26 12:22:19.956874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:31.506 [2024-07-26 12:22:19.956920] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1de90f0 (9): Bad file descriptor 00:21:31.506 [2024-07-26 12:22:20.059328] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:21:31.506 Running I/O for 1 seconds... 
00:21:31.506
00:21:31.506 Latency(us)
00:21:31.506 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:31.506 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:21:31.506 Verification LBA range: start 0x0 length 0x4000
00:21:31.506 NVMe0n1 : 1.00 8408.74 32.85 0.00 0.00 15160.15 658.39 18641.35
00:21:31.506 ===================================================================================================================
00:21:31.506 Total : 8408.74 32.85 0.00 0.00 15160.15 658.39 18641.35
00:21:31.506 12:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:31.506 12:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0
00:21:31.506 12:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:31.764 12:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:31.764 12:22:24 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0
00:21:32.022 12:22:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:21:32.281 12:22:25 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3
00:21:35.573 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:21:35.573
12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:21:35.573 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 2938176 00:21:35.573 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2938176 ']' 00:21:35.573 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2938176 00:21:35.573 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:35.573 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:35.573 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2938176 00:21:35.573 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:21:35.573 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:21:35.573 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2938176' 00:21:35.573 killing process with pid 2938176 00:21:35.573 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2938176 00:21:35.573 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2938176 00:21:35.832 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:21:35.832 12:22:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@116 -- # nvmftestfini 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:36.091 rmmod nvme_tcp 00:21:36.091 rmmod nvme_fabrics 00:21:36.091 rmmod nvme_keyring 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 2935912 ']' 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 2935912 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@950 -- # '[' -z 2935912 ']' 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # kill -0 2935912 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # uname 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2935912 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 
00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2935912' 00:21:36.091 killing process with pid 2935912 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@969 -- # kill 2935912 00:21:36.091 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@974 -- # wait 2935912 00:21:36.658 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:36.658 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:36.658 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:36.658 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:36.658 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:36.658 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:36.658 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:36.658 12:22:29 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:38.565 00:21:38.565 real 0m35.060s 00:21:38.565 user 2m3.831s 00:21:38.565 sys 0m5.772s 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 ************************************ 00:21:38.565 END TEST nvmf_failover 00:21:38.565 ************************************ 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh 
--transport=tcp 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:38.565 ************************************ 00:21:38.565 START TEST nvmf_host_discovery 00:21:38.565 ************************************ 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:21:38.565 * Looking for test storage... 00:21:38.565 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
paths/export.sh@5 -- # export PATH 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 
-- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:21:38.565 12:22:31 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci 
net_dev 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:40.470 12:22:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:40.470 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.470 12:22:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:40.470 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:40.470 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.471 12:22:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:40.471 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:40.471 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:40.471 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:40.729 12:22:33 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:40.729 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:40.729 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:21:40.729 00:21:40.729 --- 10.0.0.2 ping statistics --- 00:21:40.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.729 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:40.729 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:40.729 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.131 ms 00:21:40.729 00:21:40.729 --- 10.0.0.1 ping statistics --- 00:21:40.729 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:40.729 rtt min/avg/max/mdev = 0.131/0.131/0.131/0.000 ms 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=2941459 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 2941459 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2941459 ']' 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:40.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:40.729 12:22:33 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:40.729 [2024-07-26 12:22:33.871136] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:21:40.729 [2024-07-26 12:22:33.871219] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:40.729 EAL: No free 2048 kB hugepages reported on node 1 00:21:40.729 [2024-07-26 12:22:33.944071] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:40.989 [2024-07-26 12:22:34.064384] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:40.989 [2024-07-26 12:22:34.064443] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:21:40.989 [2024-07-26 12:22:34.064456] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:40.989 [2024-07-26 12:22:34.064467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:40.989 [2024-07-26 12:22:34.064476] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:40.989 [2024-07-26 12:22:34.064508] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.924 [2024-07-26 12:22:34.884184] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 
00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.924 [2024-07-26 12:22:34.892336] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.924 null0 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.924 null1 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=2941612 00:21:41.924 
12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 2941612 /tmp/host.sock 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@831 -- # '[' -z 2941612 ']' 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:21:41.924 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:41.924 12:22:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:41.924 [2024-07-26 12:22:34.968840] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:21:41.924 [2024-07-26 12:22:34.968920] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2941612 ] 00:21:41.924 EAL: No free 2048 kB hugepages reported on node 1 00:21:41.924 [2024-07-26 12:22:35.026396] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:41.924 [2024-07-26 12:22:35.139133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # return 0 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.183 12:22:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:42.183 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.441 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:21:42.441 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:21:42.441 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.441 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.441 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.441 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:21:42.441 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:42.441 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.441 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:42.441 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 
00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.442 [2024-07-26 12:22:35.558152] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:42.442 12:22:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # get_bdev_list 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:42.442 12:22:35 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # 
xtrace_disable 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:42.442 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:42.699 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:42.699 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == \n\v\m\e\0 ]] 00:21:42.699 12:22:35 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@920 -- # sleep 1 00:21:43.264 [2024-07-26 12:22:36.323941] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:43.264 [2024-07-26 12:22:36.323973] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:43.264 [2024-07-26 12:22:36.323999] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:43.264 [2024-07-26 12:22:36.452439] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:21:43.264 [2024-07-26 12:22:36.513334] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:43.264 [2024-07-26 12:22:36.513355] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:43.521 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:43.521 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:43.521 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:43.521 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:43.521 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.521 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:43.521 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.521 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:43.521 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@59 -- # sort 00:21:43.521 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0 ]] 00:21:43.779 12:22:36 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:43.779 
12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:43.779 12:22:36 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:21:44.712 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:44.712 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:44.712 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:44.712 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.712 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.712 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:44.712 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.712 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:44.712 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@55 -- # xargs 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.970 12:22:37 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.970 [2024-07-26 12:22:38.041497] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:44.970 [2024-07-26 12:22:38.042143] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:44.970 [2024-07-26 12:22:38.042209] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ 
"$(get_subsystem_names)" == "nvme0" ]]' 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:44.970 12:22:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:44.970 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:44.971 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:21:44.971 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:44.971 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:44.971 12:22:38 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:44.971 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:44.971 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:44.971 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:44.971 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:44.971 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:44.971 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:44.971 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:44.971 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:44.971 [2024-07-26 12:22:38.170253] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:21:44.971 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:21:44.971 12:22:38 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:21:45.228 [2024-07-26 12:22:38.268958] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:45.228 [2024-07-26 12:22:38.268983] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:21:45.228 [2024-07-26 12:22:38.268993] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 
found again 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 
'cond=get_notification_count && ((notification_count == expected_count))' 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.164 12:22:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.164 [2024-07-26 12:22:39.262294] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:21:46.164 [2024-07-26 12:22:39.262325] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:21:46.164 [2024-07-26 12:22:39.267237] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.164 [2024-07-26 12:22:39.267270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.164 [2024-07-26 12:22:39.267296] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.164 [2024-07-26 12:22:39.267309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:46.164 [2024-07-26 12:22:39.267324] 
nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.164 [2024-07-26 12:22:39.267338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.164 [2024-07-26 12:22:39.267368] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.164 [2024-07-26 12:22:39.267381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.164 [2024-07-26 12:22:39.267394] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2dc20 is same with the state(5) to be set 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:46.164 [2024-07-26 12:22:39.277244] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2dc20 (9): Bad file descriptor 00:21:46.164 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.164 [2024-07-26 12:22:39.287284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.165 [2024-07-26 12:22:39.287555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, 
errno = 111 00:21:46.165 [2024-07-26 12:22:39.287587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2dc20 with addr=10.0.0.2, port=4420 00:21:46.165 [2024-07-26 12:22:39.287612] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2dc20 is same with the state(5) to be set 00:21:46.165 [2024-07-26 12:22:39.287638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2dc20 (9): Bad file descriptor 00:21:46.165 [2024-07-26 12:22:39.287675] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.165 [2024-07-26 12:22:39.287695] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.165 [2024-07-26 12:22:39.287711] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.165 [2024-07-26 12:22:39.287734] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.165 [2024-07-26 12:22:39.297370] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.165 [2024-07-26 12:22:39.297599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.165 [2024-07-26 12:22:39.297629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2dc20 with addr=10.0.0.2, port=4420 00:21:46.165 [2024-07-26 12:22:39.297647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2dc20 is same with the state(5) to be set 00:21:46.165 [2024-07-26 12:22:39.297671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2dc20 (9): Bad file descriptor 00:21:46.165 [2024-07-26 12:22:39.297694] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.165 [2024-07-26 12:22:39.297709] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.165 [2024-07-26 12:22:39.297724] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.165 [2024-07-26 12:22:39.297759] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
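[Editor's note] The `waitforcondition` calls traced above (autotest_common.sh@914-918) poll a shell condition until it holds or the retry budget runs out. A minimal sketch of that helper, reconstructed from the xtrace alone — the real autotest_common.sh implementation may differ in detail:

```shell
#!/usr/bin/env bash
# Sketch of the waitforcondition helper visible in the xtrace:
# evaluate the condition string up to $max times, sleeping 1s
# between attempts, as the @915-@920 trace lines suggest.
waitforcondition() {
    local cond=$1
    local max=10
    while (( max-- )); do
        # eval lets callers pass conditions like
        # '[[ "$(get_subsystem_names)" == "nvme0" ]]'
        if eval "$cond"; then
            return 0
        fi
        sleep 1
    done
    echo "condition not met: $cond" >&2
    return 1
}
```

This matches how host/discovery.sh@129 above waits for the subsystem name to become `nvme0` while the target is reconfigured.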
00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:46.165 [2024-07-26 12:22:39.307445] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.165 [2024-07-26 12:22:39.307638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.165 [2024-07-26 12:22:39.307667] 
nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2dc20 with addr=10.0.0.2, port=4420 00:21:46.165 [2024-07-26 12:22:39.307683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2dc20 is same with the state(5) to be set 00:21:46.165 [2024-07-26 12:22:39.307711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2dc20 (9): Bad file descriptor 00:21:46.165 [2024-07-26 12:22:39.307732] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.165 [2024-07-26 12:22:39.307746] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.165 [2024-07-26 12:22:39.307760] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.165 [2024-07-26 12:22:39.307804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.165 [2024-07-26 12:22:39.317520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.165 [2024-07-26 12:22:39.317690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.165 [2024-07-26 12:22:39.317720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2dc20 with addr=10.0.0.2, port=4420 00:21:46.165 [2024-07-26 12:22:39.317736] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2dc20 is same with the state(5) to be set 00:21:46.165 [2024-07-26 12:22:39.317758] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2dc20 (9): Bad file descriptor 00:21:46.165 [2024-07-26 12:22:39.317777] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.165 [2024-07-26 12:22:39.317791] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.165 [2024-07-26 12:22:39.317804] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.165 [2024-07-26 12:22:39.317836] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.165 [2024-07-26 12:22:39.327593] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.165 [2024-07-26 12:22:39.327866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.165 [2024-07-26 12:22:39.327894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2dc20 with addr=10.0.0.2, port=4420 00:21:46.165 [2024-07-26 12:22:39.327910] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2dc20 is same with the state(5) to be set 00:21:46.165 [2024-07-26 12:22:39.327932] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2dc20 (9): Bad file descriptor 00:21:46.165 [2024-07-26 12:22:39.327964] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.165 [2024-07-26 12:22:39.327981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.165 [2024-07-26 12:22:39.327995] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.165 [2024-07-26 12:22:39.328014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.165 [2024-07-26 12:22:39.337663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.165 [2024-07-26 12:22:39.337888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.165 [2024-07-26 12:22:39.337915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2dc20 with addr=10.0.0.2, port=4420 00:21:46.165 [2024-07-26 12:22:39.337931] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2dc20 is same with the state(5) to be set 00:21:46.165 [2024-07-26 12:22:39.337953] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2dc20 (9): Bad file descriptor 00:21:46.165 [2024-07-26 12:22:39.337973] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.165 [2024-07-26 12:22:39.337992] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.165 [2024-07-26 12:22:39.338007] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.165 [2024-07-26 12:22:39.338075] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
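[Editor's note] Every failed reconnect above reports `connect() failed, errno = 111`. On Linux that errno is ECONNREFUSED: nothing is listening on 10.0.0.2:4420 while the test moves the subsystem to the second port. A quick one-liner to confirm the mapping (assumes python3 on PATH; the decode is illustrative, not part of the test itself):

```shell
# Decode errno 111 to its symbolic name and message (Linux values).
python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
# -> ECONNREFUSED - Connection refused (on Linux)
```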
00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:46.165 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:46.166 [2024-07-26 12:22:39.347734] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:21:46.166 [2024-07-26 12:22:39.347950] 
posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.166 [2024-07-26 12:22:39.347980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2dc20 with addr=10.0.0.2, port=4420 00:21:46.166 [2024-07-26 12:22:39.347998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2dc20 is same with the state(5) to be set 00:21:46.166 [2024-07-26 12:22:39.348022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c2dc20 (9): Bad file descriptor 00:21:46.166 [2024-07-26 12:22:39.348082] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:21:46.166 [2024-07-26 12:22:39.348118] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:21:46.166 [2024-07-26 12:22:39.348132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:21:46.166 [2024-07-26 12:22:39.348152] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
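[Editor's note] The driver keeps retrying the reset every ~10 ms until the listener reappears, which is why the same `resetting controller` / `Resetting controller failed.` pair repeats above. A hedged sketch of the equivalent wait-for-listener loop in plain bash (the helper name, address, and retry budget are made up for illustration; `/dev/tcp` is a bash feature):

```shell
# Poll until a TCP listener accepts on addr:port, or give up.
# Mirrors the reconnect loop in the log: a refused connect (errno 111)
# just means "retry after a short delay".
wait_for_listener() {
    local addr=$1 port=$2 tries=${3:-50}
    while (( tries-- )); do
        # bash-only /dev/tcp connect; refused connections are silenced
        if (exec 3<>"/dev/tcp/$addr/$port") 2>/dev/null; then
            return 0
        fi
        sleep 0.1
    done
    return 1
}
```

For example, `wait_for_listener 10.0.0.2 4421` would return once the target starts listening on the second port.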
00:21:46.166 [2024-07-26 12:22:39.349077] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:21:46.166 [2024-07-26 12:22:39.349120] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:46.166 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:46.166 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:21:46.166 12:22:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # sleep 1 00:21:47.567 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.567 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:21:47.567 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_paths nvme0 00:21:47.567 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:21:47.567 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:21:47.567 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.567 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.567 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:21:47.567 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:21:47.567 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@917 -- # [[ 4421 == \4\4\2\1 ]] 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_subsystem_names 00:21:47.568 12:22:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_bdev_list 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.568 
12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # [[ '' == '' ]] 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # local max=10 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # (( max-- )) 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # get_notification_count 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:21:47.568 12:22:40 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@917 -- # (( notification_count == expected_count )) 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # return 0 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:47.568 12:22:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.501 [2024-07-26 12:22:41.666914] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:21:48.501 [2024-07-26 12:22:41.666945] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:21:48.501 [2024-07-26 12:22:41.666971] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:21:48.759 [2024-07-26 12:22:41.796423] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:21:48.760 [2024-07-26 12:22:41.901855] 
bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:21:48.760 [2024-07-26 12:22:41.901894] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.760 request: 00:21:48.760 { 00:21:48.760 "name": "nvme", 00:21:48.760 "trtype": "tcp", 
00:21:48.760 "traddr": "10.0.0.2", 00:21:48.760 "adrfam": "ipv4", 00:21:48.760 "trsvcid": "8009", 00:21:48.760 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:48.760 "wait_for_attach": true, 00:21:48.760 "method": "bdev_nvme_start_discovery", 00:21:48.760 "req_id": 1 00:21:48.760 } 00:21:48.760 Got JSON-RPC error response 00:21:48.760 response: 00:21:48.760 { 00:21:48.760 "code": -17, 00:21:48.760 "message": "File exists" 00:21:48.760 } 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 
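[Editor's note] The second `bdev_nvme_start_discovery` above fails with JSON-RPC error `-17` / `"File exists"` because a discovery service named `nvme` is already attached; the test asserts that failure deliberately via the `NOT` wrapper. A caller that wanted the call to be idempotent could treat exactly that error as success. A sketch of the error-classification half (the helper name is hypothetical, and it matches on the payload text shown in this log):

```shell
# Return 0 if a JSON-RPC error payload is the "already exists" case
# (code -17, message "File exists") that bdev_nvme_start_discovery
# emits when a discovery service with the same -b name is running.
is_already_exists_error() {
    local payload=$1
    [[ $payload == *'"code": -17'* && $payload == *'File exists'* ]]
}

# Hypothetical idempotent use with the rpc_cmd call from the trace:
#   out=$(rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme \
#           -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
#           -q nqn.2021-12.io.spdk:test -w 2>&1) \
#       || is_already_exists_error "$out"
```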
00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:48.760 12:22:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:48.760 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:48.760 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:48.760 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # local es=0 00:21:48.760 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:48.760 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:48.760 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.760 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:48.760 12:22:42 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:48.760 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:21:48.760 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:48.760 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.018 request: 00:21:49.018 { 00:21:49.018 "name": "nvme_second", 00:21:49.018 "trtype": "tcp", 00:21:49.018 "traddr": "10.0.0.2", 00:21:49.018 "adrfam": "ipv4", 00:21:49.018 "trsvcid": "8009", 00:21:49.018 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:49.018 "wait_for_attach": true, 00:21:49.018 "method": "bdev_nvme_start_discovery", 00:21:49.018 "req_id": 1 00:21:49.018 } 00:21:49.018 Got JSON-RPC error response 00:21:49.018 response: 00:21:49.018 { 00:21:49.018 "code": -17, 00:21:49.018 "message": "File exists" 00:21:49.018 } 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # 
jq -r '.[].name' 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # 
local es=0 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:49.018 12:22:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:49.951 [2024-07-26 12:22:43.114129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:49.951 [2024-07-26 12:22:43.114201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2c230 with addr=10.0.0.2, port=8010 00:21:49.951 [2024-07-26 12:22:43.114233] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:49.951 [2024-07-26 12:22:43.114249] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:49.951 [2024-07-26 12:22:43.114263] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:50.883 [2024-07-26 12:22:44.116392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:50.883 [2024-07-26 
12:22:44.116431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1c2c230 with addr=10.0.0.2, port=8010 00:21:50.883 [2024-07-26 12:22:44.116455] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:21:50.883 [2024-07-26 12:22:44.116469] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:21:50.883 [2024-07-26 12:22:44.116493] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:21:52.256 [2024-07-26 12:22:45.118617] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:21:52.256 request: 00:21:52.256 { 00:21:52.256 "name": "nvme_second", 00:21:52.256 "trtype": "tcp", 00:21:52.256 "traddr": "10.0.0.2", 00:21:52.256 "adrfam": "ipv4", 00:21:52.256 "trsvcid": "8010", 00:21:52.256 "hostnqn": "nqn.2021-12.io.spdk:test", 00:21:52.256 "wait_for_attach": false, 00:21:52.256 "attach_timeout_ms": 3000, 00:21:52.256 "method": "bdev_nvme_start_discovery", 00:21:52.256 "req_id": 1 00:21:52.256 } 00:21:52.256 Got JSON-RPC error response 00:21:52.256 response: 00:21:52.256 { 00:21:52.256 "code": -110, 00:21:52.256 "message": "Connection timed out" 00:21:52.256 } 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@653 -- # es=1 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@561 -- # xtrace_disable 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 2941612 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:52.256 rmmod nvme_tcp 00:21:52.256 rmmod nvme_fabrics 00:21:52.256 rmmod nvme_keyring 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:52.256 12:22:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 2941459 ']' 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 2941459 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@950 -- # '[' -z 2941459 ']' 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # kill -0 2941459 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # uname 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2941459 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2941459' 00:21:52.256 killing process with pid 2941459 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@969 -- # kill 2941459 00:21:52.256 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@974 -- # wait 2941459 00:21:52.515 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:52.515 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:52.515 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:52.515 12:22:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:52.515 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:52.515 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:52.515 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:52.515 12:22:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.417 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:54.417 00:21:54.417 real 0m15.858s 00:21:54.417 user 0m23.999s 00:21:54.417 sys 0m2.956s 00:21:54.417 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1126 -- # xtrace_disable 00:21:54.417 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:21:54.417 ************************************ 00:21:54.417 END TEST nvmf_host_discovery 00:21:54.417 ************************************ 00:21:54.417 12:22:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:21:54.417 12:22:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:21:54.417 12:22:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:21:54.417 12:22:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:54.417 ************************************ 00:21:54.417 START TEST nvmf_host_multipath_status 00:21:54.417 ************************************ 00:21:54.417 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh 
--transport=tcp 00:21:54.417 * Looking for test storage... 00:21:54.417 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:54.417 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:54.417 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:21:54.418 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:54.676 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # 
rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:54.677 12:22:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:21:54.677 12:22:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:56.578 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:56.579 
12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:21:56.579 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:21:56.579 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:21:56.579 Found net devices under 0000:0a:00.0: cvl_0_0 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:56.579 12:22:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:21:56.579 Found net devices under 0000:0a:00.1: cvl_0_1 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:56.579 12:22:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:56.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:56.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.239 ms 00:21:56.579 00:21:56.579 --- 10.0.0.2 ping statistics --- 00:21:56.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.579 rtt min/avg/max/mdev = 0.239/0.239/0.239/0.000 ms 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:56.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:56.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:21:56.579 00:21:56.579 --- 10.0.0.1 ping statistics --- 00:21:56.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:56.579 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:21:56.579 12:22:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@724 -- # xtrace_disable 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=2944917 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 2944917 00:21:56.579 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2944917 ']' 00:21:56.580 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:56.580 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:56.580 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:56.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:56.580 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:56.580 12:22:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:56.580 [2024-07-26 12:22:49.830814] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:21:56.580 [2024-07-26 12:22:49.830898] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:56.838 EAL: No free 2048 kB hugepages reported on node 1 00:21:56.838 [2024-07-26 12:22:49.904288] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:21:56.838 [2024-07-26 12:22:50.025677] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:56.838 [2024-07-26 12:22:50.025734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:56.838 [2024-07-26 12:22:50.025761] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:56.838 [2024-07-26 12:22:50.025774] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:56.838 [2024-07-26 12:22:50.025786] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
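The `nvmf_tcp_init` steps earlier in this section (flushing addresses, creating the `cvl_0_0_ns_spdk` namespace, moving the target NIC into it, addressing both sides, opening port 4420, then ping-verifying both directions) can be summarized as a dry-run sketch. Interface names, addresses, and the namespace name are taken from the log above; `ns_setup_cmds` is a hypothetical helper added here for illustration (it only prints the commands, since running them requires root), not part of `nvmf/common.sh`.

```shell
# Dry-run sketch of the target/initiator split performed by nvmf/common.sh:
# the target NIC (cvl_0_0) is moved into a network namespace so the target
# (10.0.0.2) and initiator (10.0.0.1) can talk over real TCP on one host.
ns_setup_cmds() {
    ns=cvl_0_0_ns_spdk    # NVMF_TARGET_NAMESPACE in the log
    tgt=cvl_0_0           # target-side interface
    ini=cvl_0_1           # initiator-side interface
    echo "ip -4 addr flush $tgt"
    echo "ip -4 addr flush $ini"
    echo "ip netns add $ns"
    echo "ip link set $tgt netns $ns"
    echo "ip addr add 10.0.0.1/24 dev $ini"
    echo "ip netns exec $ns ip addr add 10.0.0.2/24 dev $tgt"
    echo "ip link set $ini up"
    echo "ip netns exec $ns ip link set $tgt up"
    echo "ip netns exec $ns ip link set lo up"
    echo "iptables -I INPUT 1 -i $ini -p tcp --dport 4420 -j ACCEPT"
}
ns_setup_cmds
```

The namespace is also why `nvmf_tgt` is launched below through `ip netns exec cvl_0_0_ns_spdk`: the target process must live on the namespaced side of the link.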
00:21:56.838 [2024-07-26 12:22:50.025866] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.838 [2024-07-26 12:22:50.025873] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:57.772 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:57.772 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:57.772 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:57.772 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@730 -- # xtrace_disable 00:21:57.772 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:57.772 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:57.772 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=2944917 00:21:57.772 12:22:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:21:58.029 [2024-07-26 12:22:51.045202] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:58.030 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:21:58.288 Malloc0 00:21:58.288 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:21:58.546 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:58.804 12:22:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:59.062 [2024-07-26 12:22:52.074805] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:59.062 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:59.321 [2024-07-26 12:22:52.331581] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:59.321 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=2945327 00:21:59.321 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:21:59.321 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:21:59.321 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 2945327 /var/tmp/bdevperf.sock 00:21:59.321 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@831 -- # '[' -z 2945327 ']' 00:21:59.321 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:59.321 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # local max_retries=100 00:21:59.321 12:22:52 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:59.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:59.321 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # xtrace_disable 00:21:59.321 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:21:59.579 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:21:59.579 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # return 0 00:21:59.579 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:21:59.837 12:22:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:22:00.402 Nvme0n1 00:22:00.402 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:22:00.659 Nvme0n1 00:22:00.659 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:22:00.659 12:22:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 
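The `port_status` checks that dominate the rest of this log all follow one pattern: query bdevperf's RPC socket for its I/O paths, then use `jq` to extract one field (`current`, `connected`, or `accessible`) for one listener port. The sketch below reconstructs that pipeline from the log; `port_status_cmd` is a hypothetical helper that only prints the pipeline rather than running it, since executing it needs a live bdevperf at `/var/tmp/bdevperf.sock`.

```shell
# Prints the RPC + jq pipeline that host/multipath_status.sh's port_status
# runs for a given listener port and field. The jq filter is copied from
# the log; "rpc.py" stands in for the full scripts/rpc.py path.
port_status_cmd() {
    port=$1    # 4420 or 4421
    field=$2   # current | connected | accessible
    echo "rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |" \
         "jq -r '.poll_groups[].io_paths[] |" \
         "select(.transport.trsvcid==\"$port\").$field'"
}
port_status_cmd 4420 current
```

`check_status` then simply asserts the six extracted values (`current`/`connected`/`accessible` for ports 4420 and 4421) against its six boolean arguments, which is the `[[ true == \t\r\u\e ]]`-style comparisons seen throughout.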
00:22:03.226 12:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:22:03.226 12:22:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:03.226 12:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:03.226 12:22:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:22:04.600 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:22:04.600 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:04.600 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:04.600 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.600 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:04.600 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:04.600 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.600 12:22:57 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:04.858 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:04.858 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:04.858 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:04.858 12:22:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:05.117 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.117 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:05.117 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.117 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:05.375 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.375 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:05.375 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.375 
12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:05.633 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.633 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:05.633 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:05.633 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:05.892 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:05.892 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:22:05.892 12:22:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:06.150 12:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:06.408 12:22:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:22:07.342 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:22:07.342 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:07.342 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.342 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:07.600 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:07.600 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:07.600 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.600 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:07.859 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:07.859 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:07.859 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:07.859 12:23:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:08.116 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.116 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:08.116 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.116 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:08.374 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.374 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:08.374 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.374 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:08.632 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.632 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:08.632 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:08.632 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:08.891 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:08.891 12:23:01 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:22:08.891 12:23:01 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:09.149 12:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:09.407 12:23:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:22:10.338 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:22:10.338 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:10.338 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.338 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:10.596 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:10.596 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:10.596 12:23:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.596 12:23:03 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:10.854 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:10.854 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:10.854 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:10.854 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:11.112 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.112 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:11.112 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.112 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:11.370 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.370 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:11.370 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.370 
12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:11.628 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.628 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:11.628 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:11.628 12:23:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:11.886 12:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:11.886 12:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:22:11.886 12:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:12.145 12:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:12.403 12:23:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:22:13.338 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:22:13.338 12:23:06 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:13.338 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.338 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:13.596 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:13.596 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:13.596 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.596 12:23:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:13.854 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:13.854 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:13.854 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:13.854 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:14.112 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.112 12:23:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:14.112 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.112 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:14.370 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.370 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:14.370 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.370 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:14.628 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:14.628 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:14.628 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:14.628 12:23:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:14.885 12:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:14.886 
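Stepping back from the individual checks: each `set_ANA_state A B` call below assigns ANA state `A` to the 4420 listener and `B` to the 4421 listener, and the following `check_status` asserts the resulting path flags. The table below is a summary read off this log (flag order matches `check_status`: current/4420, current/4421, connected/4420, connected/4421, accessible/4420, accessible/4421); `expected_flags` is a hypothetical helper added for illustration, not part of the test scripts.

```shell
# Maps the (4420, 4421) ANA state pair to the flags check_status asserts
# in this log. Both paths stay connected throughout; "current" follows
# the preferred (optimized, else non_optimized) accessible path, and
# "accessible" is false only for inaccessible listeners.
expected_flags() {
    case "$1/$2" in
        optimized/optimized)         echo "true false true true true true" ;;
        non_optimized/optimized)     echo "false true true true true true" ;;
        non_optimized/non_optimized) echo "true false true true true true" ;;
        non_optimized/inaccessible)  echo "true false true true true false" ;;
        inaccessible/inaccessible)   echo "false false true true false false" ;;
        inaccessible/optimized)      echo "false true true true false true" ;;
    esac
}
expected_flags inaccessible inaccessible
```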
12:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:22:14.886 12:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:15.143 12:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:15.401 12:23:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:22:16.348 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:22:16.348 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:16.348 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.348 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:16.611 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:16.611 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:16.611 12:23:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.611 12:23:09 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:16.869 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:16.869 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:16.869 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:16.869 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:17.127 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.127 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:17.127 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.127 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:17.385 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:17.385 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:17.385 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.385 
12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:17.642 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:17.643 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:17.643 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:17.643 12:23:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:17.900 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:17.900 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:22:17.900 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:22:18.158 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:18.415 12:23:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:22:19.347 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:22:19.347 12:23:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:19.347 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.347 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:19.605 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:19.605 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:19.605 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.605 12:23:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:19.862 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:19.862 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:19.862 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:19.862 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:20.121 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.122 12:23:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:20.122 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.122 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:20.382 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.382 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:22:20.382 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.382 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:20.640 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:20.640 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:20.640 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:20.640 12:23:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:20.898 12:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:20.898 
12:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:22:21.156 12:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:22:21.156 12:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:22:21.414 12:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:21.671 12:23:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:22:22.606 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:22:22.606 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:22.606 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:22.606 12:23:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:23.172 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.172 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:23.172 
12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.172 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:23.172 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.172 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:23.172 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.172 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:23.430 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.430 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:23.430 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.430 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:23.688 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.688 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:23.688 
12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.688 12:23:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:23.946 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:23.946 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:23.946 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:23.946 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:24.204 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:24.204 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:22:24.204 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:24.463 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:22:24.721 12:23:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 
00:22:25.655 12:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:22:25.655 12:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:22:25.655 12:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.655 12:23:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:25.913 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:25.913 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:25.913 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:25.913 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:26.171 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.430 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:26.430 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.430 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:22:26.430 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.430 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:26.430 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.430 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:26.688 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.688 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:26.688 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.688 12:23:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:26.946 12:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:26.946 12:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:26.946 12:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:26.946 12:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:22:27.205 12:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:27.205 12:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:22:27.205 12:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:27.463 12:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:22:27.721 12:23:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:22:29.096 12:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:22:29.096 12:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:29.096 12:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.096 12:23:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:29.096 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.096 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:22:29.096 12:23:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.096 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:29.355 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.355 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:29.355 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.355 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:22:29.615 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.615 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:29.615 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.615 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:29.912 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:29.912 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:29.912 12:23:22 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:29.912 12:23:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:30.169 12:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.169 12:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:22:30.169 12:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:30.169 12:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:22:30.426 12:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:30.426 12:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:22:30.426 12:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:22:30.684 12:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:22:30.942 12:23:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 
00:22:31.876 12:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:22:31.876 12:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:22:31.876 12:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:31.876 12:23:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:22:32.135 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.135 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:22:32.135 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.135 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:22:32.393 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:32.393 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:22:32.393 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.393 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select 
(.transport.trsvcid=="4420").connected' 00:22:32.651 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.651 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:22:32.651 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.651 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:22:32.909 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:32.909 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:22:32.909 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:32.909 12:23:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:22:33.167 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:22:33.167 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:22:33.167 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:22:33.167 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | 
select (.transport.trsvcid=="4421").accessible' 00:22:33.425 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:22:33.425 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 2945327 00:22:33.425 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2945327 ']' 00:22:33.425 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2945327 00:22:33.425 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:22:33.425 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.425 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2945327 00:22:33.425 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_2 00:22:33.425 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_2 = sudo ']' 00:22:33.425 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2945327' 00:22:33.425 killing process with pid 2945327 00:22:33.425 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2945327 00:22:33.425 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2945327 00:22:33.425 Connection closed with partial response: 00:22:33.425 00:22:33.425 00:22:33.686 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 2945327 00:22:33.686 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 
00:22:33.686 [2024-07-26 12:22:52.397609] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:22:33.686 [2024-07-26 12:22:52.397714] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2945327 ]
00:22:33.686 EAL: No free 2048 kB hugepages reported on node 1
00:22:33.686 [2024-07-26 12:22:52.456790] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1
00:22:33.686 [2024-07-26 12:22:52.563177] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:22:33.686 Running I/O for 90 seconds...
00:22:33.686 [2024-07-26 12:23:08.310153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:68632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.686 [2024-07-26 12:23:08.310225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005c p:0 m:0 dnr:0
00:22:33.686 [2024-07-26 12:23:08.310302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:68656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.686 [2024-07-26 12:23:08.310323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:22:33.686 [2024-07-26 12:23:08.310349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:68664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.686 [2024-07-26 12:23:08.310366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0
00:22:33.686 [2024-07-26 12:23:08.310405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.686 [2024-07-26 12:23:08.310425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:22:33.686 [2024-07-26 12:23:08.310447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:68680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.686 [2024-07-26 12:23:08.310464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:22:33.686 [2024-07-26 12:23:08.310487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:68688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.686 [2024-07-26 12:23:08.310503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:22:33.686 [2024-07-26 12:23:08.310525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:68696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.686 [2024-07-26 12:23:08.310541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:22:33.686 [2024-07-26 12:23:08.310564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:68704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.686 [2024-07-26 12:23:08.310582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:22:33.686 [2024-07-26 12:23:08.310604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:68712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.686 [2024-07-26 12:23:08.310621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0064 p:0 m:0 dnr:0
00:22:33.686 [2024-07-26 12:23:08.310643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.686 [2024-07-26 12:23:08.310661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:22:33.686 [2024-07-26 12:23:08.310683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:68728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.310713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.310736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:68736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.310752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.310773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:68744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.310789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.310811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.310827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.310848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:68760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.310864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.310885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:68768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.310900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.310921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:68776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.310937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.311869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:68784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.311897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.311926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:68792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.311945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.311980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:68800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:68808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:68816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:68824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:68832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:68840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0074 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:68848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:68856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:68864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:68872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:68880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:68888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:007a p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:68896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:007b p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:68904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:68912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:68920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:68928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:68936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:68944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:68952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:68960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:68968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:68976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.687 [2024-07-26 12:23:08.312907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0
00:22:33.687 [2024-07-26 12:23:08.312930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:68984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.312946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.312970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:68992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.312987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:69000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:69008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:69016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:000a p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:69024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000b p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:69032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000c p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:69040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:69056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000f p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:69064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:69072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:69080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0012 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:69088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0013 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:69096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0015 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:69112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:69120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:69128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:68640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.688 [2024-07-26 12:23:08.313894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:68648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:33.688 [2024-07-26 12:23:08.313939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:001a p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.313964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:69136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.313981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:001b p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.314024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:69144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.314042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.314076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:69152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.314095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.314122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:69160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.314140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:001e p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.314167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:69168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.314184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.314211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:69176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.314228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0020 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.314255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:69184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.314272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.314298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:69192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.314316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.314342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:69200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.314359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0023 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.314390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:69208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.314409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0024 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.314435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:69216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.314452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0025 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.314478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:69224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.314495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.314522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:69232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.314539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:22:33.688 [2024-07-26 12:23:08.314565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:69240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.688 [2024-07-26 12:23:08.314582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.314625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:69248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.314642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.314669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:69256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.314685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002a p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.314710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.314727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.314753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:69272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.314770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.314796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:69280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.314813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.314839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:69288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.314855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002e p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.314881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:69296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.314898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.314926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:69304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.314943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.314969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:69312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.314985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0031 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:69320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:69328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:69336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0034 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:69344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:69352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:69368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:69376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:69392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003b p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:69400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:69408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:69416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:69424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:003f p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:69432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0040 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:69440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:69448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.315844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:69456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.315861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0043 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.316030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:69464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.316053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0044 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.316115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:69472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.316136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.316168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:69480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.316186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.316217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:69488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.316235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.316266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:69496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.316288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.316320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:69504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.316339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.316385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:69512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.689 [2024-07-26 12:23:08.316403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:22:33.689 [2024-07-26 12:23:08.316434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:69520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.690 [2024-07-26 12:23:08.316452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:22:33.690 [2024-07-26 12:23:08.316481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:69528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.690 [2024-07-26 12:23:08.316499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0
00:22:33.690 [2024-07-26 12:23:08.316528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:22:33.690 [2024-07-26 12:23:08.316546] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.316576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:69544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.316593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.316623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:69552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.316640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.316671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:69560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.316688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.316717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:69568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.316734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.316764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:69576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.316781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.316809] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:69584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.316826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.316856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:69592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.316872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.316906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.316923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.316952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:69608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.316969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.316998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:69616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.317015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.317044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:69624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.317083] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.317118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:69632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.317136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.317167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:69640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.317184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:08.317215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:69648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:08.317232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.946164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:44672 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.690 [2024-07-26 12:23:23.946227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.946311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:44528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.690 [2024-07-26 12:23:23.946332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.946371] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:44560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.690 [2024-07-26 12:23:23.946389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.946411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:44592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.690 [2024-07-26 12:23:23.946443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.946465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:44624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.690 [2024-07-26 12:23:23.946480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.946513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:44656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.690 [2024-07-26 12:23:23.946529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.946550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:44688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:23.946565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.946603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:44704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:23.946619] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.947945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:44720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:23.947972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.948013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:44736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:23.948031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.948053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:44752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:23.948078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.948101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:44768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:23.948118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.948140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:44784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:23.948157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.948179] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:44800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:23.948195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.948217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:44816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:23.948233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.948255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:44832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:23.948271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.948292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:44848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:23.948308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.948330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:44864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.690 [2024-07-26 12:23:23.948356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:22:33.690 [2024-07-26 12:23:23.948379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:44880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.948396] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.948418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:44896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.948434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.948456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:44912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.948472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.948494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:44928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.948510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.948532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:44944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.948548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.948585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:44960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.948601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.948623] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:44976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.948638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.948659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:44992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.948675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.948696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:45008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.948728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.950590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:45024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.950616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.950645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:45040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.950663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.950690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:45056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.950712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.950736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:45072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.950752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.950773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:45088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.950789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.950810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:45104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.950827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.950848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:45120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.950864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.950886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:45136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.950901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.950922] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:45152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.950951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.950977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:45168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.950993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.951015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:45184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.951031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.951053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:45200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.951087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.951112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:45216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.951128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.951150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:45232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.951166] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.951187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:45248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.951203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.951229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:44696 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.691 [2024-07-26 12:23:23.951246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.951284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:44728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:33.691 [2024-07-26 12:23:23.951300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.951581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:45272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.951604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.951635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:45288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.951653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.951676] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:45304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.691 [2024-07-26 12:23:23.951692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:22:33.691 [2024-07-26 12:23:23.951714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:45320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.692 [2024-07-26 12:23:23.951730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:22:33.692 [2024-07-26 12:23:23.951751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:45336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.692 [2024-07-26 12:23:23.951767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:22:33.692 [2024-07-26 12:23:23.951789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:45352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.692 [2024-07-26 12:23:23.951806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:22:33.692 [2024-07-26 12:23:23.951827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:45368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.692 [2024-07-26 12:23:23.951843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:22:33.692 [2024-07-26 12:23:23.951865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:45384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.692 [2024-07-26 12:23:23.951881] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:22:33.692 [2024-07-26 12:23:23.951902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:45400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.692 [2024-07-26 12:23:23.951918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:22:33.692 [2024-07-26 12:23:23.951940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:45416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.692 [2024-07-26 12:23:23.951956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:22:33.692 [2024-07-26 12:23:23.951983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:45432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:22:33.692 [2024-07-26 12:23:23.952000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:22:33.692 Received shutdown signal, test time was about 32.452974 seconds 00:22:33.692 00:22:33.692 Latency(us) 00:22:33.692 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:22:33.692 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:22:33.692 Verification LBA range: start 0x0 length 0x4000 00:22:33.692 Nvme0n1 : 32.45 7920.35 30.94 0.00 0.00 16134.81 333.75 4026531.84 00:22:33.692 =================================================================================================================== 00:22:33.692 Total : 7920.35 30.94 0.00 0.00 16134.81 333.75 4026531.84 00:22:33.692 12:23:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:33.950 rmmod nvme_tcp 00:22:33.950 rmmod nvme_fabrics 00:22:33.950 rmmod nvme_keyring 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 2944917 ']' 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 2944917 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@950 -- # '[' -z 2944917 ']' 00:22:33.950 12:23:27 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # kill -0 2944917 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # uname 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2944917 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2944917' 00:22:33.950 killing process with pid 2944917 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@969 -- # kill 2944917 00:22:33.950 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@974 -- # wait 2944917 00:22:34.209 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:34.209 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:34.209 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:34.209 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:34.209 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:34.209 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:34.209 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # 
eval '_remove_spdk_ns 15> /dev/null' 00:22:34.209 12:23:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:36.743 00:22:36.743 real 0m41.845s 00:22:36.743 user 2m3.835s 00:22:36.743 sys 0m11.474s 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:22:36.743 ************************************ 00:22:36.743 END TEST nvmf_host_multipath_status 00:22:36.743 ************************************ 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:36.743 ************************************ 00:22:36.743 START TEST nvmf_discovery_remove_ifc 00:22:36.743 ************************************ 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:22:36.743 * Looking for test storage... 
00:22:36.743 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:22:36.743 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # 
discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:22:36.744 12:23:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # 
set +x 00:22:38.644 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.644 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:22:38.644 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:38.644 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:38.644 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:38.644 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:38.644 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.645 12:23:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for 
pci in "${pci_devs[@]}" 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:38.645 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:38.645 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:38.645 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:38.645 12:23:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:38.645 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:38.645 12:23:31 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:38.645 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:38.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:38.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:22:38.645 00:22:38.645 --- 10.0.0.2 ping statistics --- 00:22:38.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.646 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:38.646 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:38.646 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.217 ms 00:22:38.646 00:22:38.646 --- 10.0.0.1 ping statistics --- 00:22:38.646 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.646 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@724 -- # xtrace_disable 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=2951535 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 2951535 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2951535 ']' 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.646 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.646 12:23:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:38.646 [2024-07-26 12:23:31.765460] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:22:38.646 [2024-07-26 12:23:31.765535] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.646 EAL: No free 2048 kB hugepages reported on node 1 00:22:38.646 [2024-07-26 12:23:31.827301] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:38.905 [2024-07-26 12:23:31.933388] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.905 [2024-07-26 12:23:31.933438] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.905 [2024-07-26 12:23:31.933466] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.905 [2024-07-26 12:23:31.933478] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.905 [2024-07-26 12:23:31.933488] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:38.905 [2024-07-26 12:23:31.933514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@730 -- # xtrace_disable 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:38.905 [2024-07-26 12:23:32.089163] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.905 [2024-07-26 12:23:32.097415] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:22:38.905 null0 00:22:38.905 [2024-07-26 12:23:32.129274] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=2951554 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 2951554 /tmp/host.sock 
00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@831 -- # '[' -z 2951554 ']' 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # local rpc_addr=/tmp/host.sock 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # local max_retries=100 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:22:38.905 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # xtrace_disable 00:22:38.905 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:39.164 [2024-07-26 12:23:32.198268] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:22:39.164 [2024-07-26 12:23:32.198343] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2951554 ] 00:22:39.164 EAL: No free 2048 kB hugepages reported on node 1 00:22:39.164 [2024-07-26 12:23:32.256389] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:39.164 [2024-07-26 12:23:32.364005] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:39.164 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:22:39.164 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # return 0 00:22:39.164 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:39.164 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:22:39.164 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.164 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:39.164 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.164 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:22:39.164 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.164 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:39.422 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:39.422 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:22:39.422 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:39.422 12:23:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.355 [2024-07-26 12:23:33.568015] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:40.355 [2024-07-26 12:23:33.568064] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:40.355 [2024-07-26 12:23:33.568089] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:40.613 [2024-07-26 12:23:33.695518] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:22:40.613 [2024-07-26 12:23:33.799354] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:40.613 [2024-07-26 12:23:33.799434] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:40.613 [2024-07-26 12:23:33.799479] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:40.613 [2024-07-26 12:23:33.799501] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:22:40.613 [2024-07-26 12:23:33.799540] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:40.613 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 
00:22:40.613 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:22:40.613 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:40.613 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.613 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:40.613 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.613 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.613 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:40.613 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:40.613 [2024-07-26 12:23:33.805548] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x1a288e0 was disconnected and freed. delete nvme_qpair. 
00:22:40.613 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.613 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:22:40.613 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:22:40.613 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:22:40.870 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:22:40.870 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:40.870 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:40.870 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:40.870 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:40.870 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:40.870 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:40.870 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:40.870 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:40.870 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:40.870 12:23:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:41.803 12:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:41.803 12:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:41.803 12:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:41.803 12:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:41.803 12:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:41.803 12:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:41.803 12:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:41.803 12:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:41.803 12:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:41.803 12:23:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:42.737 12:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:42.737 12:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:42.737 12:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:42.737 12:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:42.737 12:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:42.737 12:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:42.737 12:23:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 
00:22:42.995 12:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:42.995 12:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:42.995 12:23:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:43.928 12:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:43.928 12:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:43.928 12:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:43.928 12:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:43.928 12:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:43.928 12:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:43.928 12:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:43.928 12:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:43.928 12:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:43.928 12:23:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:44.901 12:23:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:44.901 12:23:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:44.901 12:23:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:44.901 12:23:38 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:44.901 12:23:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:44.901 12:23:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:44.901 12:23:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:44.901 12:23:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:44.901 12:23:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:44.901 12:23:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:46.272 12:23:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:46.272 12:23:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:46.272 12:23:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:46.272 12:23:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:46.272 12:23:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:46.272 12:23:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:46.272 12:23:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:46.272 12:23:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:46.272 12:23:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:46.272 12:23:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
sleep 1 00:22:46.272 [2024-07-26 12:23:39.240270] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:22:46.272 [2024-07-26 12:23:39.240360] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.272 [2024-07-26 12:23:39.240383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.272 [2024-07-26 12:23:39.240416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.272 [2024-07-26 12:23:39.240430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.272 [2024-07-26 12:23:39.240444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.272 [2024-07-26 12:23:39.240456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.272 [2024-07-26 12:23:39.240470] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.272 [2024-07-26 12:23:39.240483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.272 [2024-07-26 12:23:39.240496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:46.272 [2024-07-26 12:23:39.240509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:46.272 [2024-07-26 12:23:39.240522] 
nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ef320 is same with the state(5) to be set 00:22:46.272 [2024-07-26 12:23:39.250286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ef320 (9): Bad file descriptor 00:22:46.272 [2024-07-26 12:23:39.260329] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:47.204 12:23:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:47.204 12:23:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:47.204 12:23:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:47.204 12:23:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:47.204 12:23:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:47.204 12:23:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:47.204 12:23:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:47.204 [2024-07-26 12:23:40.301111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:22:47.204 [2024-07-26 12:23:40.301189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ef320 with addr=10.0.0.2, port=4420 00:22:47.204 [2024-07-26 12:23:40.301223] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ef320 is same with the state(5) to be set 00:22:47.204 [2024-07-26 12:23:40.301287] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ef320 (9): Bad file descriptor 00:22:47.204 [2024-07-26 12:23:40.301805] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable 
to perform failover, already in progress. 00:22:47.204 [2024-07-26 12:23:40.301860] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:47.204 [2024-07-26 12:23:40.301880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:47.204 [2024-07-26 12:23:40.301900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:47.204 [2024-07-26 12:23:40.301936] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:22:47.204 [2024-07-26 12:23:40.301956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:47.204 12:23:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:47.204 12:23:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:22:47.204 12:23:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:48.136 [2024-07-26 12:23:41.304464] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:22:48.136 [2024-07-26 12:23:41.304495] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:22:48.136 [2024-07-26 12:23:41.304510] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:22:48.136 [2024-07-26 12:23:41.304525] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:22:48.136 [2024-07-26 12:23:41.304546] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:22:48.136 [2024-07-26 12:23:41.304582] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:22:48.136 [2024-07-26 12:23:41.304622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.136 [2024-07-26 12:23:41.304645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.136 [2024-07-26 12:23:41.304666] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.136 [2024-07-26 12:23:41.304682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.136 [2024-07-26 12:23:41.304697] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.136 [2024-07-26 12:23:41.304712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.136 [2024-07-26 12:23:41.304727] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.136 [2024-07-26 12:23:41.304742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.136 [2024-07-26 12:23:41.304757] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:22:48.136 [2024-07-26 12:23:41.304772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:48.136 [2024-07-26 12:23:41.304793] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: 
[nqn.2014-08.org.nvmexpress.discovery] in failed state. 00:22:48.136 [2024-07-26 12:23:41.304913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ee780 (9): Bad file descriptor 00:22:48.136 [2024-07-26 12:23:41.305936] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:22:48.136 [2024-07-26 12:23:41.305961] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:22:48.136 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:48.136 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.136 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.136 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:48.136 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:48.136 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:48.136 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:48.136 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.136 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:22:48.136 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:48.136 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:48.393 12:23:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:22:48.393 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:48.393 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:48.393 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:48.393 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:48.393 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:48.393 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:48.393 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:48.393 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:48.393 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:48.393 12:23:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:49.326 12:23:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:49.326 12:23:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:49.326 12:23:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:49.326 12:23:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:49.326 12:23:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:49.326 12:23:42 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:49.327 12:23:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:49.327 12:23:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:49.327 12:23:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:49.327 12:23:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.258 [2024-07-26 12:23:43.363946] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:22:50.258 [2024-07-26 12:23:43.363987] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:22:50.258 [2024-07-26 12:23:43.364015] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:22:50.258 12:23:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:50.258 12:23:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:50.258 12:23:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:50.258 12:23:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:50.258 12:23:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:50.258 12:23:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:50.258 12:23:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:50.258 [2024-07-26 12:23:43.492436] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 
new subsystem nvme1 00:22:50.258 12:23:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:50.515 12:23:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:22:50.515 12:23:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:22:50.515 [2024-07-26 12:23:43.554400] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:22:50.515 [2024-07-26 12:23:43.554470] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:22:50.515 [2024-07-26 12:23:43.554512] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:22:50.515 [2024-07-26 12:23:43.554537] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:22:50.515 [2024-07-26 12:23:43.554552] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:22:50.516 [2024-07-26 12:23:43.561398] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19f5120 was disconnected and freed. delete nvme_qpair. 
00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@561 -- # xtrace_disable 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 2951554 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2951554 ']' 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2951554 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2951554 
00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2951554' 00:22:51.449 killing process with pid 2951554 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2951554 00:22:51.449 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2951554 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:51.708 rmmod nvme_tcp 00:22:51.708 rmmod nvme_fabrics 00:22:51.708 rmmod nvme_keyring 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 2951535 ']' 00:22:51.708 
12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 2951535 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@950 -- # '[' -z 2951535 ']' 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # kill -0 2951535 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # uname 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2951535 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2951535' 00:22:51.708 killing process with pid 2951535 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@969 -- # kill 2951535 00:22:51.708 12:23:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@974 -- # wait 2951535 00:22:51.967 12:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:51.967 12:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:51.967 12:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:51.967 12:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:51.967 12:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 
00:22:51.967 12:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:51.967 12:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:51.967 12:23:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.504 12:23:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:54.504 00:22:54.504 real 0m17.762s 00:22:54.504 user 0m25.647s 00:22:54.504 sys 0m3.090s 00:22:54.504 12:23:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1126 -- # xtrace_disable 00:22:54.504 12:23:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:22:54.504 ************************************ 00:22:54.504 END TEST nvmf_discovery_remove_ifc 00:22:54.504 ************************************ 00:22:54.504 12:23:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:54.504 12:23:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:22:54.504 12:23:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:22:54.504 12:23:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.504 ************************************ 00:22:54.504 START TEST nvmf_identify_kernel_target 00:22:54.504 ************************************ 00:22:54.504 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:22:54.504 * Looking for test storage... 
00:22:54.504 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:54.504 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.504 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:22:54.504 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM 
EXIT 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.505 12:23:47 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@293 -- # local -A pci_drivers 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:22:56.411 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.411 12:23:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.411 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:22:56.412 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.412 12:23:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:22:56.412 Found net devices under 0000:0a:00.0: cvl_0_0 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:22:56.412 Found net devices under 0000:0a:00.1: cvl_0_1 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:22:56.412 
12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:56.412 
12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:56.412 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:56.412 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:22:56.412 00:22:56.412 --- 10.0.0.2 ping statistics --- 00:22:56.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.412 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:56.412 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:56.412 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.133 ms 00:22:56.412 00:22:56.412 --- 10.0.0.1 ping statistics --- 00:22:56.412 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:56.412 rtt min/avg/max/mdev = 0.133/0.133/0.133/0.000 ms 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:22:56.412 12:23:49 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@639 -- # local block nvme 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:22:56.412 12:23:49 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:22:57.348 Waiting for block devices as requested 00:22:57.348 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:22:57.607 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:57.607 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:57.607 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:57.865 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:57.865 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:57.865 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:57.865 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:58.124 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:58.125 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:22:58.125 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:22:58.383 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:22:58.383 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:22:58.383 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:22:58.383 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:22:58.641 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:22:58.641 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:22:58.641 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:22:58.641 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:22:58.641 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 
00:22:58.641 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:22:58.641 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:22:58.641 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:22:58.641 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:22:58.641 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:22:58.641 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:22:58.900 No valid GPT data, bailing 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:22:58.900 12:23:51 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:22:58.900 00:22:58.900 Discovery Log Number of Records 2, Generation counter 2 00:22:58.900 =====Discovery Log Entry 0====== 00:22:58.900 trtype: tcp 00:22:58.900 adrfam: ipv4 00:22:58.900 subtype: current discovery subsystem 00:22:58.900 treq: not specified, sq flow control disable supported 00:22:58.900 portid: 1 00:22:58.900 trsvcid: 4420 00:22:58.900 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:22:58.900 traddr: 10.0.0.1 00:22:58.900 eflags: none 00:22:58.900 sectype: none 00:22:58.900 =====Discovery Log Entry 1====== 00:22:58.900 trtype: tcp 00:22:58.900 adrfam: ipv4 00:22:58.900 subtype: nvme subsystem 00:22:58.900 treq: not specified, sq flow control disable supported 00:22:58.900 portid: 1 
00:22:58.900 trsvcid: 4420 00:22:58.900 subnqn: nqn.2016-06.io.spdk:testnqn 00:22:58.900 traddr: 10.0.0.1 00:22:58.900 eflags: none 00:22:58.900 sectype: none 00:22:58.900 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:22:58.900 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:22:58.900 EAL: No free 2048 kB hugepages reported on node 1 00:22:58.900 ===================================================== 00:22:58.900 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:58.900 ===================================================== 00:22:58.901 Controller Capabilities/Features 00:22:58.901 ================================ 00:22:58.901 Vendor ID: 0000 00:22:58.901 Subsystem Vendor ID: 0000 00:22:58.901 Serial Number: 7433ca74b119d8340db1 00:22:58.901 Model Number: Linux 00:22:58.901 Firmware Version: 6.7.0-68 00:22:58.901 Recommended Arb Burst: 0 00:22:58.901 IEEE OUI Identifier: 00 00 00 00:22:58.901 Multi-path I/O 00:22:58.901 May have multiple subsystem ports: No 00:22:58.901 May have multiple controllers: No 00:22:58.901 Associated with SR-IOV VF: No 00:22:58.901 Max Data Transfer Size: Unlimited 00:22:58.901 Max Number of Namespaces: 0 00:22:58.901 Max Number of I/O Queues: 1024 00:22:58.901 NVMe Specification Version (VS): 1.3 00:22:58.901 NVMe Specification Version (Identify): 1.3 00:22:58.901 Maximum Queue Entries: 1024 00:22:58.901 Contiguous Queues Required: No 00:22:58.901 Arbitration Mechanisms Supported 00:22:58.901 Weighted Round Robin: Not Supported 00:22:58.901 Vendor Specific: Not Supported 00:22:58.901 Reset Timeout: 7500 ms 00:22:58.901 Doorbell Stride: 4 bytes 00:22:58.901 NVM Subsystem Reset: Not Supported 00:22:58.901 Command Sets Supported 00:22:58.901 NVM Command Set: Supported 00:22:58.901 Boot Partition: Not Supported 
00:22:58.901 Memory Page Size Minimum: 4096 bytes 00:22:58.901 Memory Page Size Maximum: 4096 bytes 00:22:58.901 Persistent Memory Region: Not Supported 00:22:58.901 Optional Asynchronous Events Supported 00:22:58.901 Namespace Attribute Notices: Not Supported 00:22:58.901 Firmware Activation Notices: Not Supported 00:22:58.901 ANA Change Notices: Not Supported 00:22:58.901 PLE Aggregate Log Change Notices: Not Supported 00:22:58.901 LBA Status Info Alert Notices: Not Supported 00:22:58.901 EGE Aggregate Log Change Notices: Not Supported 00:22:58.901 Normal NVM Subsystem Shutdown event: Not Supported 00:22:58.901 Zone Descriptor Change Notices: Not Supported 00:22:58.901 Discovery Log Change Notices: Supported 00:22:58.901 Controller Attributes 00:22:58.901 128-bit Host Identifier: Not Supported 00:22:58.901 Non-Operational Permissive Mode: Not Supported 00:22:58.901 NVM Sets: Not Supported 00:22:58.901 Read Recovery Levels: Not Supported 00:22:58.901 Endurance Groups: Not Supported 00:22:58.901 Predictable Latency Mode: Not Supported 00:22:58.901 Traffic Based Keep ALive: Not Supported 00:22:58.901 Namespace Granularity: Not Supported 00:22:58.901 SQ Associations: Not Supported 00:22:58.901 UUID List: Not Supported 00:22:58.901 Multi-Domain Subsystem: Not Supported 00:22:58.901 Fixed Capacity Management: Not Supported 00:22:58.901 Variable Capacity Management: Not Supported 00:22:58.901 Delete Endurance Group: Not Supported 00:22:58.901 Delete NVM Set: Not Supported 00:22:58.901 Extended LBA Formats Supported: Not Supported 00:22:58.901 Flexible Data Placement Supported: Not Supported 00:22:58.901 00:22:58.901 Controller Memory Buffer Support 00:22:58.901 ================================ 00:22:58.901 Supported: No 00:22:58.901 00:22:58.901 Persistent Memory Region Support 00:22:58.901 ================================ 00:22:58.901 Supported: No 00:22:58.901 00:22:58.901 Admin Command Set Attributes 00:22:58.901 ============================ 00:22:58.901 Security 
Send/Receive: Not Supported 00:22:58.901 Format NVM: Not Supported 00:22:58.901 Firmware Activate/Download: Not Supported 00:22:58.901 Namespace Management: Not Supported 00:22:58.901 Device Self-Test: Not Supported 00:22:58.901 Directives: Not Supported 00:22:58.901 NVMe-MI: Not Supported 00:22:58.901 Virtualization Management: Not Supported 00:22:58.901 Doorbell Buffer Config: Not Supported 00:22:58.901 Get LBA Status Capability: Not Supported 00:22:58.901 Command & Feature Lockdown Capability: Not Supported 00:22:58.901 Abort Command Limit: 1 00:22:58.901 Async Event Request Limit: 1 00:22:58.901 Number of Firmware Slots: N/A 00:22:58.901 Firmware Slot 1 Read-Only: N/A 00:22:58.901 Firmware Activation Without Reset: N/A 00:22:58.901 Multiple Update Detection Support: N/A 00:22:58.901 Firmware Update Granularity: No Information Provided 00:22:58.901 Per-Namespace SMART Log: No 00:22:58.901 Asymmetric Namespace Access Log Page: Not Supported 00:22:58.901 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:58.901 Command Effects Log Page: Not Supported 00:22:58.901 Get Log Page Extended Data: Supported 00:22:58.901 Telemetry Log Pages: Not Supported 00:22:58.901 Persistent Event Log Pages: Not Supported 00:22:58.901 Supported Log Pages Log Page: May Support 00:22:58.901 Commands Supported & Effects Log Page: Not Supported 00:22:58.901 Feature Identifiers & Effects Log Page:May Support 00:22:58.901 NVMe-MI Commands & Effects Log Page: May Support 00:22:58.901 Data Area 4 for Telemetry Log: Not Supported 00:22:58.901 Error Log Page Entries Supported: 1 00:22:58.901 Keep Alive: Not Supported 00:22:58.901 00:22:58.901 NVM Command Set Attributes 00:22:58.901 ========================== 00:22:58.901 Submission Queue Entry Size 00:22:58.901 Max: 1 00:22:58.901 Min: 1 00:22:58.901 Completion Queue Entry Size 00:22:58.901 Max: 1 00:22:58.901 Min: 1 00:22:58.901 Number of Namespaces: 0 00:22:58.901 Compare Command: Not Supported 00:22:58.901 Write Uncorrectable Command: 
Not Supported 00:22:58.901 Dataset Management Command: Not Supported 00:22:58.901 Write Zeroes Command: Not Supported 00:22:58.901 Set Features Save Field: Not Supported 00:22:58.901 Reservations: Not Supported 00:22:58.901 Timestamp: Not Supported 00:22:58.901 Copy: Not Supported 00:22:58.901 Volatile Write Cache: Not Present 00:22:58.901 Atomic Write Unit (Normal): 1 00:22:58.901 Atomic Write Unit (PFail): 1 00:22:58.901 Atomic Compare & Write Unit: 1 00:22:58.901 Fused Compare & Write: Not Supported 00:22:58.901 Scatter-Gather List 00:22:58.901 SGL Command Set: Supported 00:22:58.901 SGL Keyed: Not Supported 00:22:58.901 SGL Bit Bucket Descriptor: Not Supported 00:22:58.901 SGL Metadata Pointer: Not Supported 00:22:58.901 Oversized SGL: Not Supported 00:22:58.901 SGL Metadata Address: Not Supported 00:22:58.901 SGL Offset: Supported 00:22:58.901 Transport SGL Data Block: Not Supported 00:22:58.901 Replay Protected Memory Block: Not Supported 00:22:58.901 00:22:58.901 Firmware Slot Information 00:22:58.901 ========================= 00:22:58.901 Active slot: 0 00:22:58.901 00:22:58.901 00:22:58.901 Error Log 00:22:58.901 ========= 00:22:58.901 00:22:58.901 Active Namespaces 00:22:58.901 ================= 00:22:58.901 Discovery Log Page 00:22:58.901 ================== 00:22:58.901 Generation Counter: 2 00:22:58.901 Number of Records: 2 00:22:58.901 Record Format: 0 00:22:58.901 00:22:58.901 Discovery Log Entry 0 00:22:58.901 ---------------------- 00:22:58.901 Transport Type: 3 (TCP) 00:22:58.901 Address Family: 1 (IPv4) 00:22:58.901 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:58.901 Entry Flags: 00:22:58.901 Duplicate Returned Information: 0 00:22:58.901 Explicit Persistent Connection Support for Discovery: 0 00:22:58.901 Transport Requirements: 00:22:58.901 Secure Channel: Not Specified 00:22:58.901 Port ID: 1 (0x0001) 00:22:58.901 Controller ID: 65535 (0xffff) 00:22:58.901 Admin Max SQ Size: 32 00:22:58.901 Transport Service Identifier: 4420 
00:22:58.901 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:58.901 Transport Address: 10.0.0.1 00:22:58.901 Discovery Log Entry 1 00:22:58.901 ---------------------- 00:22:58.901 Transport Type: 3 (TCP) 00:22:58.901 Address Family: 1 (IPv4) 00:22:58.901 Subsystem Type: 2 (NVM Subsystem) 00:22:58.901 Entry Flags: 00:22:58.901 Duplicate Returned Information: 0 00:22:58.901 Explicit Persistent Connection Support for Discovery: 0 00:22:58.902 Transport Requirements: 00:22:58.902 Secure Channel: Not Specified 00:22:58.902 Port ID: 1 (0x0001) 00:22:58.902 Controller ID: 65535 (0xffff) 00:22:58.902 Admin Max SQ Size: 32 00:22:58.902 Transport Service Identifier: 4420 00:22:58.902 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:22:58.902 Transport Address: 10.0.0.1 00:22:58.902 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:22:59.161 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.161 get_feature(0x01) failed 00:22:59.161 get_feature(0x02) failed 00:22:59.161 get_feature(0x04) failed 00:22:59.161 ===================================================== 00:22:59.161 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:22:59.161 ===================================================== 00:22:59.161 Controller Capabilities/Features 00:22:59.161 ================================ 00:22:59.161 Vendor ID: 0000 00:22:59.161 Subsystem Vendor ID: 0000 00:22:59.161 Serial Number: 78f4ce14cf8c7d41e0f9 00:22:59.161 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:22:59.161 Firmware Version: 6.7.0-68 00:22:59.161 Recommended Arb Burst: 6 00:22:59.161 IEEE OUI Identifier: 00 00 00 00:22:59.161 Multi-path I/O 00:22:59.161 May have multiple subsystem ports: Yes 00:22:59.161 May have multiple 
controllers: Yes 00:22:59.161 Associated with SR-IOV VF: No 00:22:59.161 Max Data Transfer Size: Unlimited 00:22:59.161 Max Number of Namespaces: 1024 00:22:59.161 Max Number of I/O Queues: 128 00:22:59.161 NVMe Specification Version (VS): 1.3 00:22:59.161 NVMe Specification Version (Identify): 1.3 00:22:59.161 Maximum Queue Entries: 1024 00:22:59.161 Contiguous Queues Required: No 00:22:59.161 Arbitration Mechanisms Supported 00:22:59.161 Weighted Round Robin: Not Supported 00:22:59.161 Vendor Specific: Not Supported 00:22:59.161 Reset Timeout: 7500 ms 00:22:59.161 Doorbell Stride: 4 bytes 00:22:59.161 NVM Subsystem Reset: Not Supported 00:22:59.161 Command Sets Supported 00:22:59.161 NVM Command Set: Supported 00:22:59.161 Boot Partition: Not Supported 00:22:59.161 Memory Page Size Minimum: 4096 bytes 00:22:59.161 Memory Page Size Maximum: 4096 bytes 00:22:59.161 Persistent Memory Region: Not Supported 00:22:59.161 Optional Asynchronous Events Supported 00:22:59.161 Namespace Attribute Notices: Supported 00:22:59.161 Firmware Activation Notices: Not Supported 00:22:59.161 ANA Change Notices: Supported 00:22:59.161 PLE Aggregate Log Change Notices: Not Supported 00:22:59.161 LBA Status Info Alert Notices: Not Supported 00:22:59.161 EGE Aggregate Log Change Notices: Not Supported 00:22:59.161 Normal NVM Subsystem Shutdown event: Not Supported 00:22:59.161 Zone Descriptor Change Notices: Not Supported 00:22:59.161 Discovery Log Change Notices: Not Supported 00:22:59.161 Controller Attributes 00:22:59.161 128-bit Host Identifier: Supported 00:22:59.161 Non-Operational Permissive Mode: Not Supported 00:22:59.161 NVM Sets: Not Supported 00:22:59.161 Read Recovery Levels: Not Supported 00:22:59.161 Endurance Groups: Not Supported 00:22:59.161 Predictable Latency Mode: Not Supported 00:22:59.161 Traffic Based Keep ALive: Supported 00:22:59.161 Namespace Granularity: Not Supported 00:22:59.161 SQ Associations: Not Supported 00:22:59.161 UUID List: Not Supported 
00:22:59.161 Multi-Domain Subsystem: Not Supported 00:22:59.161 Fixed Capacity Management: Not Supported 00:22:59.161 Variable Capacity Management: Not Supported 00:22:59.161 Delete Endurance Group: Not Supported 00:22:59.161 Delete NVM Set: Not Supported 00:22:59.161 Extended LBA Formats Supported: Not Supported 00:22:59.161 Flexible Data Placement Supported: Not Supported 00:22:59.161 00:22:59.161 Controller Memory Buffer Support 00:22:59.161 ================================ 00:22:59.161 Supported: No 00:22:59.161 00:22:59.161 Persistent Memory Region Support 00:22:59.161 ================================ 00:22:59.161 Supported: No 00:22:59.161 00:22:59.161 Admin Command Set Attributes 00:22:59.161 ============================ 00:22:59.161 Security Send/Receive: Not Supported 00:22:59.161 Format NVM: Not Supported 00:22:59.161 Firmware Activate/Download: Not Supported 00:22:59.161 Namespace Management: Not Supported 00:22:59.161 Device Self-Test: Not Supported 00:22:59.161 Directives: Not Supported 00:22:59.161 NVMe-MI: Not Supported 00:22:59.161 Virtualization Management: Not Supported 00:22:59.161 Doorbell Buffer Config: Not Supported 00:22:59.161 Get LBA Status Capability: Not Supported 00:22:59.161 Command & Feature Lockdown Capability: Not Supported 00:22:59.161 Abort Command Limit: 4 00:22:59.161 Async Event Request Limit: 4 00:22:59.161 Number of Firmware Slots: N/A 00:22:59.161 Firmware Slot 1 Read-Only: N/A 00:22:59.161 Firmware Activation Without Reset: N/A 00:22:59.161 Multiple Update Detection Support: N/A 00:22:59.161 Firmware Update Granularity: No Information Provided 00:22:59.161 Per-Namespace SMART Log: Yes 00:22:59.161 Asymmetric Namespace Access Log Page: Supported 00:22:59.161 ANA Transition Time : 10 sec 00:22:59.161 00:22:59.161 Asymmetric Namespace Access Capabilities 00:22:59.161 ANA Optimized State : Supported 00:22:59.162 ANA Non-Optimized State : Supported 00:22:59.162 ANA Inaccessible State : Supported 00:22:59.162 ANA Persistent Loss 
State : Supported 00:22:59.162 ANA Change State : Supported 00:22:59.162 ANAGRPID is not changed : No 00:22:59.162 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:22:59.162 00:22:59.162 ANA Group Identifier Maximum : 128 00:22:59.162 Number of ANA Group Identifiers : 128 00:22:59.162 Max Number of Allowed Namespaces : 1024 00:22:59.162 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:22:59.162 Command Effects Log Page: Supported 00:22:59.162 Get Log Page Extended Data: Supported 00:22:59.162 Telemetry Log Pages: Not Supported 00:22:59.162 Persistent Event Log Pages: Not Supported 00:22:59.162 Supported Log Pages Log Page: May Support 00:22:59.162 Commands Supported & Effects Log Page: Not Supported 00:22:59.162 Feature Identifiers & Effects Log Page:May Support 00:22:59.162 NVMe-MI Commands & Effects Log Page: May Support 00:22:59.162 Data Area 4 for Telemetry Log: Not Supported 00:22:59.162 Error Log Page Entries Supported: 128 00:22:59.162 Keep Alive: Supported 00:22:59.162 Keep Alive Granularity: 1000 ms 00:22:59.162 00:22:59.162 NVM Command Set Attributes 00:22:59.162 ========================== 00:22:59.162 Submission Queue Entry Size 00:22:59.162 Max: 64 00:22:59.162 Min: 64 00:22:59.162 Completion Queue Entry Size 00:22:59.162 Max: 16 00:22:59.162 Min: 16 00:22:59.162 Number of Namespaces: 1024 00:22:59.162 Compare Command: Not Supported 00:22:59.162 Write Uncorrectable Command: Not Supported 00:22:59.162 Dataset Management Command: Supported 00:22:59.162 Write Zeroes Command: Supported 00:22:59.162 Set Features Save Field: Not Supported 00:22:59.162 Reservations: Not Supported 00:22:59.162 Timestamp: Not Supported 00:22:59.162 Copy: Not Supported 00:22:59.162 Volatile Write Cache: Present 00:22:59.162 Atomic Write Unit (Normal): 1 00:22:59.162 Atomic Write Unit (PFail): 1 00:22:59.162 Atomic Compare & Write Unit: 1 00:22:59.162 Fused Compare & Write: Not Supported 00:22:59.162 Scatter-Gather List 00:22:59.162 SGL Command Set: Supported 00:22:59.162 SGL 
Keyed: Not Supported 00:22:59.162 SGL Bit Bucket Descriptor: Not Supported 00:22:59.162 SGL Metadata Pointer: Not Supported 00:22:59.162 Oversized SGL: Not Supported 00:22:59.162 SGL Metadata Address: Not Supported 00:22:59.162 SGL Offset: Supported 00:22:59.162 Transport SGL Data Block: Not Supported 00:22:59.162 Replay Protected Memory Block: Not Supported 00:22:59.162 00:22:59.162 Firmware Slot Information 00:22:59.162 ========================= 00:22:59.162 Active slot: 0 00:22:59.162 00:22:59.162 Asymmetric Namespace Access 00:22:59.162 =========================== 00:22:59.162 Change Count : 0 00:22:59.162 Number of ANA Group Descriptors : 1 00:22:59.162 ANA Group Descriptor : 0 00:22:59.162 ANA Group ID : 1 00:22:59.162 Number of NSID Values : 1 00:22:59.162 Change Count : 0 00:22:59.162 ANA State : 1 00:22:59.162 Namespace Identifier : 1 00:22:59.162 00:22:59.162 Commands Supported and Effects 00:22:59.162 ============================== 00:22:59.162 Admin Commands 00:22:59.162 -------------- 00:22:59.162 Get Log Page (02h): Supported 00:22:59.162 Identify (06h): Supported 00:22:59.162 Abort (08h): Supported 00:22:59.162 Set Features (09h): Supported 00:22:59.162 Get Features (0Ah): Supported 00:22:59.162 Asynchronous Event Request (0Ch): Supported 00:22:59.162 Keep Alive (18h): Supported 00:22:59.162 I/O Commands 00:22:59.162 ------------ 00:22:59.162 Flush (00h): Supported 00:22:59.162 Write (01h): Supported LBA-Change 00:22:59.162 Read (02h): Supported 00:22:59.162 Write Zeroes (08h): Supported LBA-Change 00:22:59.162 Dataset Management (09h): Supported 00:22:59.162 00:22:59.162 Error Log 00:22:59.162 ========= 00:22:59.162 Entry: 0 00:22:59.162 Error Count: 0x3 00:22:59.162 Submission Queue Id: 0x0 00:22:59.162 Command Id: 0x5 00:22:59.162 Phase Bit: 0 00:22:59.162 Status Code: 0x2 00:22:59.162 Status Code Type: 0x0 00:22:59.162 Do Not Retry: 1 00:22:59.162 Error Location: 0x28 00:22:59.162 LBA: 0x0 00:22:59.162 Namespace: 0x0 00:22:59.162 Vendor Log Page: 
0x0 00:22:59.162 ----------- 00:22:59.162 Entry: 1 00:22:59.162 Error Count: 0x2 00:22:59.162 Submission Queue Id: 0x0 00:22:59.162 Command Id: 0x5 00:22:59.162 Phase Bit: 0 00:22:59.162 Status Code: 0x2 00:22:59.162 Status Code Type: 0x0 00:22:59.162 Do Not Retry: 1 00:22:59.162 Error Location: 0x28 00:22:59.162 LBA: 0x0 00:22:59.162 Namespace: 0x0 00:22:59.162 Vendor Log Page: 0x0 00:22:59.162 ----------- 00:22:59.162 Entry: 2 00:22:59.162 Error Count: 0x1 00:22:59.162 Submission Queue Id: 0x0 00:22:59.162 Command Id: 0x4 00:22:59.162 Phase Bit: 0 00:22:59.162 Status Code: 0x2 00:22:59.162 Status Code Type: 0x0 00:22:59.162 Do Not Retry: 1 00:22:59.162 Error Location: 0x28 00:22:59.162 LBA: 0x0 00:22:59.162 Namespace: 0x0 00:22:59.162 Vendor Log Page: 0x0 00:22:59.162 00:22:59.162 Number of Queues 00:22:59.162 ================ 00:22:59.162 Number of I/O Submission Queues: 128 00:22:59.162 Number of I/O Completion Queues: 128 00:22:59.162 00:22:59.162 ZNS Specific Controller Data 00:22:59.162 ============================ 00:22:59.162 Zone Append Size Limit: 0 00:22:59.162 00:22:59.162 00:22:59.162 Active Namespaces 00:22:59.162 ================= 00:22:59.162 get_feature(0x05) failed 00:22:59.162 Namespace ID:1 00:22:59.162 Command Set Identifier: NVM (00h) 00:22:59.162 Deallocate: Supported 00:22:59.162 Deallocated/Unwritten Error: Not Supported 00:22:59.162 Deallocated Read Value: Unknown 00:22:59.162 Deallocate in Write Zeroes: Not Supported 00:22:59.162 Deallocated Guard Field: 0xFFFF 00:22:59.162 Flush: Supported 00:22:59.162 Reservation: Not Supported 00:22:59.162 Namespace Sharing Capabilities: Multiple Controllers 00:22:59.162 Size (in LBAs): 1953525168 (931GiB) 00:22:59.162 Capacity (in LBAs): 1953525168 (931GiB) 00:22:59.162 Utilization (in LBAs): 1953525168 (931GiB) 00:22:59.162 UUID: 4d8617d2-134d-49a6-a1aa-8ee724d72131 00:22:59.162 Thin Provisioning: Not Supported 00:22:59.162 Per-NS Atomic Units: Yes 00:22:59.162 Atomic Boundary Size (Normal): 0 
00:22:59.162 Atomic Boundary Size (PFail): 0 00:22:59.162 Atomic Boundary Offset: 0 00:22:59.162 NGUID/EUI64 Never Reused: No 00:22:59.162 ANA group ID: 1 00:22:59.162 Namespace Write Protected: No 00:22:59.162 Number of LBA Formats: 1 00:22:59.162 Current LBA Format: LBA Format #00 00:22:59.162 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:59.162 00:22:59.162 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:22:59.162 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:59.162 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:22:59.162 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:59.162 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:22:59.162 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:59.162 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:59.162 rmmod nvme_tcp 00:22:59.162 rmmod nvme_fabrics 00:22:59.162 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:59.162 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:22:59.162 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:22:59.162 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:22:59.163 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:59.163 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:59.163 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:59.163 
12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:59.163 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:59.163 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:59.163 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:59.163 12:23:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:01.077 12:23:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:01.077 12:23:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:23:01.077 12:23:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:23:01.077 12:23:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:23:01.077 12:23:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:01.077 12:23:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:23:01.077 12:23:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:01.077 12:23:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:23:01.077 12:23:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:01.077 12:23:54 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:01.336 12:23:54 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:02.714 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:02.714 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:02.714 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:02.714 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:02.714 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:02.714 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:02.714 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:02.714 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:02.714 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:02.714 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:02.714 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:02.714 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:02.714 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:02.714 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:02.714 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:02.714 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:03.650 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:03.650 00:23:03.650 real 0m9.487s 00:23:03.650 user 0m2.020s 00:23:03.650 sys 0m3.388s 00:23:03.650 12:23:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:03.650 12:23:56 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:23:03.650 ************************************ 00:23:03.650 END TEST nvmf_identify_kernel_target 00:23:03.650 ************************************ 00:23:03.650 12:23:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:03.650 12:23:56 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:03.650 12:23:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:03.650 12:23:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:03.650 ************************************ 00:23:03.650 START TEST nvmf_auth_host 00:23:03.650 ************************************ 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:23:03.651 * Looking for test storage... 00:23:03.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 
00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.651 12:23:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:03.651 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # 
hostnqn=nqn.2024-02.io.spdk:host0 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:23:03.910 12:23:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:05.813 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- 
# [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:05.813 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:05.813 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:05.813 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:05.813 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:05.814 12:23:58 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:05.814 12:23:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:05.814 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:05.814 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:23:05.814 00:23:05.814 --- 10.0.0.2 ping statistics --- 00:23:05.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.814 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:05.814 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:05.814 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.080 ms 00:23:05.814 00:23:05.814 --- 10.0.0.1 ping statistics --- 00:23:05.814 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:05.814 rtt min/avg/max/mdev = 0.080/0.080/0.080/0.000 ms 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=2958709 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 2958709 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@831 -- # '[' -z 2958709 ']' 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:05.814 12:23:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=954867b35c57215f0585a46d299a7cd5 00:23:07.201 12:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.LIy 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 954867b35c57215f0585a46d299a7cd5 0 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 954867b35c57215f0585a46d299a7cd5 0 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=954867b35c57215f0585a46d299a7cd5 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.LIy 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.LIy 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.LIy 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:23:07.201 12:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f79acb943daffe469fade18580a720dc5aa3cc8a3909ada39f31a65805de0916 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.RyJ 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f79acb943daffe469fade18580a720dc5aa3cc8a3909ada39f31a65805de0916 3 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f79acb943daffe469fade18580a720dc5aa3cc8a3909ada39f31a65805de0916 3 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f79acb943daffe469fade18580a720dc5aa3cc8a3909ada39f31a65805de0916 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.RyJ 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.RyJ 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.RyJ 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.201 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # 
digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c1d155c5b56db42d544612071e50be24c5ef2b2f0a221a16 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.G2B 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c1d155c5b56db42d544612071e50be24c5ef2b2f0a221a16 0 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c1d155c5b56db42d544612071e50be24c5ef2b2f0a221a16 0 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c1d155c5b56db42d544612071e50be24c5ef2b2f0a221a16 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.G2B 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.G2B 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.G2B 
00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=fd56fab5629d8fd5a1c8342db44bc6899a80fc08f8913d80 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.mQ4 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key fd56fab5629d8fd5a1c8342db44bc6899a80fc08f8913d80 2 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 fd56fab5629d8fd5a1c8342db44bc6899a80fc08f8913d80 2 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=fd56fab5629d8fd5a1c8342db44bc6899a80fc08f8913d80 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.202 12:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.mQ4 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.mQ4 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.mQ4 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=064306158ff8358b87370bf714a23717 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.ieO 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 064306158ff8358b87370bf714a23717 1 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 064306158ff8358b87370bf714a23717 1 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=064306158ff8358b87370bf714a23717 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.ieO 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.ieO 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.ieO 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=c13ceae7f1487eda32b116b288e60876 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.3Gj 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key c13ceae7f1487eda32b116b288e60876 1 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 c13ceae7f1487eda32b116b288e60876 1 00:23:07.202 12:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=c13ceae7f1487eda32b116b288e60876 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.3Gj 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.3Gj 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.3Gj 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5f64b5c13bdfcf86ddd5168938f480b98ed2d5350c115dae 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.LMm 00:23:07.202 12:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5f64b5c13bdfcf86ddd5168938f480b98ed2d5350c115dae 2 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5f64b5c13bdfcf86ddd5168938f480b98ed2d5350c115dae 2 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5f64b5c13bdfcf86ddd5168938f480b98ed2d5350c115dae 00:23:07.202 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.LMm 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.LMm 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.LMm 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # 
key=b3e2ece37f6916f26e86cfc2a656d514 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.MlN 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key b3e2ece37f6916f26e86cfc2a656d514 0 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 b3e2ece37f6916f26e86cfc2a656d514 0 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=b3e2ece37f6916f26e86cfc2a656d514 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:23:07.203 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.MlN 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.MlN 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.MlN 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@726 -- # len=64 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=8eb37a6819a02b43a3bc0bd9c1a1b17a9361cc07308083489a4ca0c3397e1a14 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.MUc 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 8eb37a6819a02b43a3bc0bd9c1a1b17a9361cc07308083489a4ca0c3397e1a14 3 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 8eb37a6819a02b43a3bc0bd9c1a1b17a9361cc07308083489a4ca0c3397e1a14 3 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=8eb37a6819a02b43a3bc0bd9c1a1b17a9361cc07308083489a4ca0c3397e1a14 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.MUc 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.MUc 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.MUc 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 2958709 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@831 -- # '[' -z 2958709 ']' 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:07.461 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:07.461 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # return 0 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.LIy 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.RyJ ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.RyJ 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.G2B 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.mQ4 ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.mQ4 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.ieO 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.3Gj ]] 00:23:07.721 12:24:00 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.3Gj 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.LMm 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.MlN ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.MlN 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.MUc 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- 
# set +x 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # 
kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:23:07.721 12:24:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:23:08.657 Waiting for block devices as requested 00:23:08.657 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:23:08.916 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:08.916 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:09.174 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:09.174 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:09.174 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:09.432 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:09.432 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:09.432 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:23:09.432 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:23:09.691 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:23:09.691 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:23:09.691 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:23:09.691 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:23:09.949 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:23:09.949 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:23:09.949 0000:80:04.0 (8086 0e20): 
vfio-pci -> ioatdma 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:23:10.520 No valid GPT data, bailing 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:10.520 12:24:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:23:10.520 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:23:10.520 00:23:10.520 Discovery Log Number of Records 2, Generation counter 2 00:23:10.520 =====Discovery Log Entry 0====== 00:23:10.520 trtype: tcp 00:23:10.520 adrfam: ipv4 00:23:10.520 subtype: current discovery subsystem 00:23:10.520 treq: not specified, sq flow control disable supported 00:23:10.520 portid: 1 00:23:10.520 trsvcid: 4420 00:23:10.520 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:23:10.520 traddr: 10.0.0.1 00:23:10.520 eflags: none 00:23:10.520 sectype: none 00:23:10.520 =====Discovery Log Entry 1====== 00:23:10.520 trtype: tcp 00:23:10.520 adrfam: ipv4 00:23:10.520 subtype: nvme subsystem 00:23:10.520 treq: not specified, sq flow control 
disable supported 00:23:10.520 portid: 1 00:23:10.520 trsvcid: 4420 00:23:10.520 subnqn: nqn.2024-02.io.spdk:cnode0 00:23:10.520 traddr: 10.0.0.1 00:23:10.520 eflags: none 00:23:10.521 sectype: none 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]] 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.521 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.779 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.779 12:24:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.779 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.779 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.779 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.779 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.779 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.779 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.779 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.779 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.779 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.779 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.780 nvme0n1 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.780 12:24:03 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]] 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.780 12:24:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:10.780 12:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:10.780 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.038 nvme0n1 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.038 12:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.038 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.038 
12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]] 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
local -A ip_candidates 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.039 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.297 nvme0n1 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == 
\n\v\m\e\0 ]] 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]] 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 
00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.297 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.298 12:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.298 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.556 nvme0n1 00:23:11.556 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.556 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.556 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.556 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.556 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.556 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.556 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.556 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.556 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.556 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.556 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.556 12:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.556 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:23:11.556 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]] 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 
00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.557 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.815 nvme0n1 00:23:11.815 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.815 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:11.815 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.815 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:11.815 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.815 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.815 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:11.815 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:11.815 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:11.816 12:24:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:11.816 12:24:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:11.816 nvme0n1 00:23:11.816 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 
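The repeated `get_main_ns_ip` trace (`nvmf/common.sh@741-755`) picks the connect address by mapping the transport name to the *name* of an environment variable and then dereferencing it with bash indirect expansion, which is why the trace shows `ip=NVMF_INITIATOR_IP` followed by `echo 10.0.0.1`. A reconstruction assumed from the xtrace lines above, not the exact `nvmf/common.sh` source:

```shell
# Sketch of get_main_ns_ip as implied by the trace: transport -> variable
# name -> indirect expansion to the actual address.
get_main_ns_ip() {
	local ip
	local -A ip_candidates=(
		[rdma]=NVMF_FIRST_TARGET_IP
		[tcp]=NVMF_INITIATOR_IP
	)
	[[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
	ip=${ip_candidates[$TEST_TRANSPORT]}
	[[ -z ${!ip} ]] && return 1   # ${!ip} dereferences the named variable
	echo "${!ip}"
}

TEST_TRANSPORT=tcp NVMF_INITIATOR_IP=10.0.0.1
get_main_ns_ip   # 10.0.0.1
```

The indirection keeps one helper usable for both transports: the tcp runs here resolve to `NVMF_INITIATOR_IP`, while rdma runs would resolve to `NVMF_FIRST_TARGET_IP`.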
00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # 
echo 'hmac(sha256)' 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]] 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.075 12:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.075 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.333 nvme0n1 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.333 12:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:12.333 12:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]] 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:12.333 12:24:05 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.333 nvme0n1 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:12.333 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:12.591 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:12.591 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:12.591 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:12.591 12:24:05 
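The `[[ nvme0 == \n\v\m\e\0 ]]` checks that recur after each `bdev_nvme_get_controllers | jq -r '.[].name'` look odd but are deliberate: the right-hand side of `[[ ... == ... ]]` is a glob pattern, and backslash-escaping every character forces a literal string comparison, so a controller name containing glob metacharacters could never false-match. A small sketch of the idiom:

```shell
# Escaped RHS = literal match only; the pattern machinery is defeated.
name=nvme0
[[ $name == \n\v\m\e\0 ]] && echo match      # match

# A name with a glob character does not match the escaped literal.
name='nvme*'
[[ $name == \n\v\m\e\0 ]] || echo no-match   # no-match
```

(The escaping in the trace is produced by bash's xtrace quoting of the test script, so every `== nvme0` comparison appears in this form.)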
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:12.591 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:12.591 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:12.591 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:12.591 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM:
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp:
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM:
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]]
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp:
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:12.592 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:12.851 nvme0n1
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==:
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It:
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==:
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]]
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It:
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:12.851 12:24:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.110 nvme0n1
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=:
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=:
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.110 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.111 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.369 nvme0n1
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t:
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=:
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t:
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]]
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=:
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.369 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.370 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.628 nvme0n1
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:13.628 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==:
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==:
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==:
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]]
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==:
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.629 12:24:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.888 nvme0n1
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM:
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp:
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM:
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]]
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp:
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:13.888 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:13.889 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:13.889 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:13.889 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:13.889 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:13.889 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:13.889 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:13.889 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:13.889 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:14.147 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:14.147 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:14.147 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.406 nvme0n1
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==:
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It:
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==:
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]]
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It:
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:14.406 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.665 nvme0n1
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=:
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)'
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=:
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:14.665
12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.665 12:24:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.953 nvme0n1 00:23:14.953 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:14.953 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:14.953 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:14.953 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:14.953 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.953 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]] 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:15.211 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.212 12:24:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.212 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.778 nvme0n1 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:15.778 12:24:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:15.778 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]] 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:15.779 12:24:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:15.779 12:24:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.345 nvme0n1 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.345 12:24:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]] 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.345 12:24:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.345 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.912 nvme0n1 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:16.912 12:24:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]] 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.912 12:24:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:16.912 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:17.477 nvme0n1 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:17.477 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:17.478 
12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:17.478 12:24:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.043 nvme0n1 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]] 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:18.043 12:24:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.043 12:24:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.977 nvme0n1 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:18.977 12:24:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]] 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:18.977 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:19.236 12:24:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:19.236 12:24:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.184 nvme0n1 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.184 12:24:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]] 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:20.184 12:24:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:20.184 12:24:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.125 nvme0n1 00:23:21.125 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.125 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:21.125 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:21.125 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.125 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.125 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.125 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:21.125 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:21.125 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.125 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.125 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.125 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:21.125 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:21.126 12:24:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]] 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:21.126 12:24:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:22.060 nvme0n1 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:22.060 
12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.060 12:24:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.997 nvme0n1 00:23:22.997 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.997 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:22.997 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.997 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:22.997 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.997 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.997 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:22.997 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:22.998 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:22.998 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:22.998 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:22.998 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:23:22.998 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:23.256 12:24:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]] 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 
-- # local -A ip_candidates 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.256 nvme0n1 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
== \n\v\m\e\0 ]] 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]] 00:23:23.256 12:24:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:23.256 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # 
[[ -z tcp ]] 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.257 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.516 nvme0n1 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
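The `nvmet_auth_set_key` entries above program DHHC-1 secrets into the kernel target. Those strings have a fixed layout that can be sanity-checked offline; the following is a minimal sketch using the keyid=1 secret copied verbatim from the trace (the field meanings are assumptions based on the NVMe DH-HMAC-CHAP secret representation, not something this log itself confirms):

```shell
# Assumed layout: DHHC-1:<hh>:<base64(secret || 4-byte CRC-32)>:
# where <hh> = 00 is taken to mean "no transform applied to the secret".
key='DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==:'
blob=${key#DHHC-1:00:}   # strip the version and transform fields
blob=${blob%:}           # strip the trailing colon
decoded_len=$(( $(printf '%s' "$blob" | base64 -d | wc -c) ))
secret_len=$(( decoded_len - 4 ))   # last 4 bytes assumed to be the CRC-32
echo "$decoded_len $secret_len"     # 52 48
```

A 48-byte secret plus a 4-byte checksum matches the 52 bytes the base64 blob decodes to, which is consistent with this being one of the long-form secrets used by the sha384 runs in this trace.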
00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]] 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.516 
12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.516 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.517 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:23.517 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.517 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.774 nvme0n1 00:23:23.774 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.774 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:23.774 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.774 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:23.775 12:24:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]] 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:23.775 12:24:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
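The trace repeats one shape per key: `nvmet_auth_set_key <digest> <dhgroup> <keyid>` on the target side, then `connect_authenticate` restricting the host to one digest/dhgroup, attaching, verifying, and detaching. A hedged reconstruction of that inner loop, with `rpc_cmd` stubbed so the sketch runs standalone (the digest, dhgroup, address, and NQNs are copied from the trace; the stub and the `keys` array are illustrative stand-ins, not SPDK's actual definitions):

```shell
rpc_cmd() { printf 'rpc_cmd %s\n' "$*"; }   # stub: the real script talks to SPDK's rpc.py

keys=(key0 key1 key2 key3 key4)             # illustrative stand-in for host/auth.sh's keys[]
for keyid in "${!keys[@]}"; do
    # connect_authenticate: pin the negotiable digest/dhgroup, attach, then tear down
    rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key "key${keyid}"
    rpc_cmd bdev_nvme_detach_controller nvme0
done
```

Note the `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})` expansion at `host/auth.sh@58` in the trace: when the controller key is empty (keyid 4 here, where `ckey=` is blank), the `:+` alternative produces no words at all, so `--dhchap-ctrlr-key` is simply omitted from the attach call, exactly as the keyid-4 `bdev_nvme_attach_controller` line below shows.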
00:23:24.033 nvme0n1 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.033 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.034 
12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.034 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.034 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.034 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.034 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.034 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.034 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.034 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.034 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.034 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:24.034 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.034 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.293 nvme0n1 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]] 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.293 12:24:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.293 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.552 nvme0n1 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.552 12:24:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]] 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.552 12:24:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.552 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.811 nvme0n1 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.811 12:24:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:24.811 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]] 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:24.812 12:24:17 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:24.812 12:24:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.070 nvme0n1 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.070 12:24:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]] 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:25.070 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.071 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:25.329 nvme0n1 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.329 
12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.329 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.587 nvme0n1 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]] 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.587 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.588 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.588 12:24:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.588 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.588 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.588 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.588 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.588 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.588 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.588 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:25.588 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.588 12:24:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.846 nvme0n1 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:25.846 12:24:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]] 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:25.846 12:24:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:25.846 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.412 nvme0n1 00:23:26.412 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.412 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.412 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.412 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.412 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.412 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.412 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.412 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.412 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.412 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.412 12:24:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.412 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.412 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:23:26.412 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]] 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.413 12:24:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.413 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.671 nvme0n1 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.671 12:24:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]] 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.671 12:24:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:26.930 nvme0n1 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:26.930 
12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:26.930 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.188 nvme0n1 00:23:27.188 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.188 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:27.188 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:27.188 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.189 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]] 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:27.447 12:24:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:27.447 12:24:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.015 nvme0n1 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.015 12:24:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]] 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.015 12:24:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.015 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.622 nvme0n1 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.622 12:24:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]] 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:28.622 12:24:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:28.622 12:24:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.190 nvme0n1 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.190 12:24:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]] 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.190 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:29.757 nvme0n1 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:23:29.757 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:29.758 
12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:29.758 12:24:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.323 nvme0n1 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]] 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:30.323 12:24:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:30.323 12:24:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.256 nvme0n1 00:23:31.256 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.256 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:31.256 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:31.256 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.256 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:31.514 12:24:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]] 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:31.514 12:24:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:31.514 12:24:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.448 nvme0n1 00:23:32.448 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.448 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:32.448 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:32.448 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.449 12:24:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]] 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:32.449 12:24:25 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:32.449 12:24:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.384 nvme0n1 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:33.384 12:24:26 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]] 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:33.384 12:24:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:34.318 nvme0n1 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:34.318 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:34.319 
12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:34.319 12:24:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.692 nvme0n1
00:23:35.692 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}"
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t:
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=:
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t:
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=:
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.693 nvme0n1
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==:
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==:
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==:
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==:
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:35.693 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:35.694 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:35.694 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:35.694 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.694 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.952 nvme0n1
00:23:35.952 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.952 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:35.952 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.952 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:35.952 12:24:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.952 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.952 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:35.952 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:35.952 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.952 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.952 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.952 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:35.952 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2
00:23:35.952 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:35.952 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM:
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp:
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM:
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]]
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp:
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:35.953 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.212 nvme0n1
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==:
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It:
00:23:36.212 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==:
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]]
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It:
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.213 nvme0n1
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:36.213 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=:
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=:
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.471 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.472 nvme0n1
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t:
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=:
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t:
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]]
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=:
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:36.472 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.738 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.738 nvme0n1
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1
00:23:36.739 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==:
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==:
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==:
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]]
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==:
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.004 12:24:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.004 nvme0n1
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:37.004 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:37.263 12:24:30
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]] 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.263 12:24:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.263 nvme0n1 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.263 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.522 12:24:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]] 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:37.522 nvme0n1 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.522 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.781 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.781 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:37.781 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:23:37.781 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:37.781 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:37.781 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:23:37.781 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:37.781 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:37.781 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:37.781 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:37.781 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:23:37.781 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:37.782 
12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.782 nvme0n1 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.782 12:24:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:37.782 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:37.782 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:37.782 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:37.782 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:37.782 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]] 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.041 12:24:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.041 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.042 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:38.042 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:38.042 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:38.042 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:38.042 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:38.042 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.042 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.300 nvme0n1 00:23:38.300 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.300 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:38.300 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.300 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.300 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:38.300 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.300 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:38.300 12:24:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]] 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:38.301 12:24:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.301 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.559 nvme0n1
00:23:38.559 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM:
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp:
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM:
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]]
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp:
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:38.560 12:24:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:38.819 nvme0n1
00:23:38.819 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==:
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It:
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==:
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]]
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It:
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.078 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.338 nvme0n1
12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=:
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=:
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.338 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.597 nvme0n1
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t:
00:23:39.597 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=:
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t:
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]]
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=:
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.598 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:39.856 12:24:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.423 nvme0n1
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==:
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==:
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==:
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]]
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==:
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.423 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.991 nvme0n1
00:23:40.991 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.991 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:40.991 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.991 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.991 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:40.991 12:24:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM:
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp:
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM:
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]]
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp:
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:40.991 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.559 nvme0n1
12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==:
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It:
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)'
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==:
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]]
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It:
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=()
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]]
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]]
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]]
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:41.559 12:24:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.137 nvme0n1
12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- #
key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.137 
12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.137 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 nvme0n1 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:OTU0ODY3YjM1YzU3MjE1ZjA1ODVhNDZkMjk5YTdjZDVl/79t: 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: ]] 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:Zjc5YWNiOTQzZGFmZmU0NjlmYWRlMTg1ODBhNzIwZGM1YWEzY2M4YTM5MDlhZGEzOWYzMWE2NTgwNWRlMDkxNnug5g8=: 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:42.752 12:24:35 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:42.752 12:24:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.687 nvme0n1 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:43.687 12:24:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]] 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:43.687 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:43.688 12:24:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:43.688 12:24:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.620 nvme0n1 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.620 12:24:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:MDY0MzA2MTU4ZmY4MzU4Yjg3MzcwYmY3MTRhMjM3MTfmMgzM: 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: ]] 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YzEzY2VhZTdmMTQ4N2VkYTMyYjExNmIyODhlNjA4NzanyTWp: 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:44.620 12:24:37 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:44.620 12:24:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.991 nvme0n1 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:45.991 12:24:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:NWY2NGI1YzEzYmRmY2Y4NmRkZDUxNjg5MzhmNDgwYjk4ZWQyZDUzNTBjMTE1ZGFlJWKgrw==: 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: ]] 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:YjNlMmVjZTM3ZjY5MTZmMjZlODZjZmMyYTY1NmQ1MTRKI2It: 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options 
--dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:45.991 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:23:45.992 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:45.992 12:24:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:23:46.924 nvme0n1 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGViMzdhNjgxOWEwMmI0M2EzYmMwYmQ5YzFhMWIxN2E5MzYxY2MwNzMwODA4MzQ4OWE0Y2EwYzMzOTdlMWExNPLJLpE=: 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:23:46.924 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:46.925 
12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:46.925 12:24:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.859 nvme0n1 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:YzFkMTU1YzViNTZkYjQyZDU0NDYxMjA3MWU1MGJlMjRjNWVmMmIyZjBhMjIxYTE2E9+g2Q==: 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: ]] 00:23:47.859 
12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZmQ1NmZhYjU2MjlkOGZkNWExYzgzNDJkYjQ0YmM2ODk5YTgwZmMwOGY4OTEzZDgw7e8FlQ==: 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.859 request: 00:23:47.859 { 00:23:47.859 "name": "nvme0", 00:23:47.859 "trtype": "tcp", 00:23:47.859 "traddr": "10.0.0.1", 00:23:47.859 "adrfam": "ipv4", 00:23:47.859 "trsvcid": "4420", 00:23:47.859 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:47.859 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:47.859 "prchk_reftag": false, 00:23:47.859 "prchk_guard": false, 00:23:47.859 "hdgst": false, 00:23:47.859 "ddgst": false, 00:23:47.859 "method": "bdev_nvme_attach_controller", 00:23:47.859 "req_id": 1 00:23:47.859 } 00:23:47.859 Got JSON-RPC error response 00:23:47.859 response: 00:23:47.859 { 00:23:47.859 "code": -5, 00:23:47.859 "message": "Input/output error" 00:23:47.859 } 00:23:47.859 12:24:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:23:47.859 12:24:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:47.859 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:47.860 12:24:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:47.860 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.119 request: 00:23:48.119 { 00:23:48.119 "name": "nvme0", 00:23:48.119 "trtype": "tcp", 00:23:48.119 "traddr": "10.0.0.1", 00:23:48.119 "adrfam": "ipv4", 00:23:48.119 
"trsvcid": "4420", 00:23:48.119 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:48.119 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:48.119 "prchk_reftag": false, 00:23:48.119 "prchk_guard": false, 00:23:48.119 "hdgst": false, 00:23:48.119 "ddgst": false, 00:23:48.119 "dhchap_key": "key2", 00:23:48.119 "method": "bdev_nvme_attach_controller", 00:23:48.119 "req_id": 1 00:23:48.119 } 00:23:48.119 Got JSON-RPC error response 00:23:48.119 response: 00:23:48.119 { 00:23:48.119 "code": -5, 00:23:48.119 "message": "Input/output error" 00:23:48.119 } 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # get_main_ns_ip 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:23:48.119 
12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # local es=0 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:23:48.119 12:24:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:48.119 request: 00:23:48.119 { 00:23:48.119 "name": "nvme0", 00:23:48.119 "trtype": "tcp", 00:23:48.119 "traddr": "10.0.0.1", 00:23:48.119 "adrfam": "ipv4", 00:23:48.119 "trsvcid": "4420", 00:23:48.119 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:23:48.119 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:23:48.119 "prchk_reftag": false, 00:23:48.119 "prchk_guard": false, 00:23:48.119 "hdgst": false, 00:23:48.119 "ddgst": false, 00:23:48.119 "dhchap_key": "key1", 00:23:48.119 "dhchap_ctrlr_key": "ckey2", 00:23:48.119 "method": "bdev_nvme_attach_controller", 00:23:48.119 "req_id": 1 00:23:48.119 } 00:23:48.119 Got JSON-RPC error response 00:23:48.119 response: 00:23:48.119 { 00:23:48.119 "code": -5, 00:23:48.119 "message": "Input/output error" 00:23:48.119 } 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@653 -- # es=1 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:23:48.119 12:24:41 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:48.119 rmmod nvme_tcp 00:23:48.119 rmmod nvme_fabrics 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:23:48.119 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:23:48.120 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 2958709 ']' 00:23:48.120 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 2958709 00:23:48.120 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@950 -- # '[' -z 2958709 ']' 00:23:48.120 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # kill -0 2958709 00:23:48.120 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # uname 00:23:48.120 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:23:48.120 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2958709 00:23:48.120 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:23:48.120 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # '[' 
reactor_0 = sudo ']' 00:23:48.120 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2958709' 00:23:48.120 killing process with pid 2958709 00:23:48.120 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@969 -- # kill 2958709 00:23:48.120 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@974 -- # wait 2958709 00:23:48.378 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:48.378 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:48.378 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:48.378 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:48.378 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:48.378 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:48.378 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:48.378 12:24:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:50.910 12:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:50.910 12:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:23:50.910 12:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:23:50.910 12:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:23:50.910 12:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 
00:23:50.910 12:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:23:50.910 12:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:50.910 12:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:23:50.910 12:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:23:50.910 12:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:23:50.910 12:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:23:50.910 12:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:23:50.910 12:24:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:51.845 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:51.845 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:51.845 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:51.845 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:51.845 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:51.845 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:51.845 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:51.845 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:23:51.845 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:23:51.845 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:23:51.845 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:23:51.845 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:23:51.845 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:23:51.845 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:23:51.845 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:23:51.845 0000:80:04.0 (8086 0e20): 
ioatdma -> vfio-pci 00:23:52.780 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:23:52.780 12:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.LIy /tmp/spdk.key-null.G2B /tmp/spdk.key-sha256.ieO /tmp/spdk.key-sha384.LMm /tmp/spdk.key-sha512.MUc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:23:52.780 12:24:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:23:54.157 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:54.157 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:23:54.157 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:54.157 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:54.157 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:54.157 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:54.157 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:54.157 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:54.157 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:54.157 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:23:54.157 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:23:54.158 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:23:54.158 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:23:54.158 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:23:54.158 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:23:54.158 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:23:54.158 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:23:54.158 00:23:54.158 real 0m50.400s 00:23:54.158 user 0m48.441s 00:23:54.158 sys 0m5.805s 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1126 -- # xtrace_disable 00:23:54.158 12:24:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.158 ************************************ 00:23:54.158 END TEST nvmf_auth_host 00:23:54.158 ************************************ 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:54.158 ************************************ 00:23:54.158 START TEST nvmf_digest 00:23:54.158 ************************************ 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:23:54.158 * Looking for test storage... 
00:23:54.158 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 
00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.158 12:24:47 
nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 
00:23:54.158 12:24:47 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:23:56.061 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 
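The `gather_supported_nvmf_pci_devs` step above buckets PCI functions into `e810`, `x722`, and `mlx` arrays by vendor/device ID and then looks up bound net devices under sysfs. A minimal standalone sketch of that matching logic is below; `match_nic_id` and `list_supported_nics` are hypothetical helper names (not SPDK functions), and only the IDs visible in this log are mapped, with the Mellanox entries wildcarded for brevity.

```shell
# Hypothetical helper (not part of SPDK): map a PCI vendor/device pair to the
# NIC family buckets used by the gather_supported_nvmf_pci_devs step above.
# Only IDs visible in this log are listed; 0x15b3 devices are wildcarded.
match_nic_id() {
    vendor="$1" device="$2"
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx ;;
        *)                           echo unknown ;;
    esac
}

# Enumerate NICs the way sysfs exposes them: each PCI function has
# vendor/device files, and any bound net device appears under net/.
list_supported_nics() {
    for pci in /sys/bus/pci/devices/*; do
        [ -e "$pci/vendor" ] || continue
        family=$(match_nic_id "$(cat "$pci/vendor")" "$(cat "$pci/device")")
        [ "$family" = unknown ] && continue
        for net in "$pci"/net/*; do
            # Print "bdf family ifname", e.g. "0000:0a:00.0 e810 cvl_0_0".
            [ -e "$net" ] && echo "${pci##*/} $family ${net##*/}"
        done
    done
}
```

On this node both ports of the 0x8086:0x159b adapter match the `e810` bucket, which is why the log prints `Found 0000:0a:00.0 (0x8086 - 0x159b)` twice.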
00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:23:56.061 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.061 12:24:49 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:23:56.061 Found net devices under 0000:0a:00.0: cvl_0_0 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:23:56.061 Found net devices under 0000:0a:00.1: cvl_0_1 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:56.061 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:56.062 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:56.062 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 
00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:56.320 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:56.320 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:23:56.320 00:23:56.320 --- 10.0.0.2 ping statistics --- 00:23:56.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.320 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:56.320 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:56.320 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.183 ms 00:23:56.320 00:23:56.320 --- 10.0.0.1 ping statistics --- 00:23:56.320 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:56.320 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 
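The `nvmf_tcp_init` sequence just executed can be summarized as: move the target NIC into a fresh namespace, address both ends on 10.0.0.0/24, bring the links up, open TCP port 4420, and verify with a ping in each direction. The sketch below is a dry run that only prints the commands in the order the log shows them (they all need root to actually run); `emit_netns_setup` is a hypothetical name.

```shell
# Dry-run sketch of the nvmf_tcp_init sequence in the log above.
# emit_netns_setup is a hypothetical helper: it prints, but does not run,
# the namespace/addressing/firewall commands (all require root).
emit_netns_setup() {
    ns="$1" target_if="$2" initiator_if="$3"
    cat <<EOF
ip -4 addr flush $target_if
ip -4 addr flush $initiator_if
ip netns add $ns
ip link set $target_if netns $ns
ip addr add 10.0.0.1/24 dev $initiator_if
ip netns exec $ns ip addr add 10.0.0.2/24 dev $target_if
ip link set $initiator_if up
ip netns exec $ns ip link set $target_if up
ip netns exec $ns ip link set lo up
iptables -I INPUT 1 -i $initiator_if -p tcp --dport 4420 -j ACCEPT
ping -c 1 10.0.0.2
ip netns exec $ns ping -c 1 10.0.0.1
EOF
}
```

With the arguments from this run, `emit_netns_setup cvl_0_0_ns_spdk cvl_0_0 cvl_0_1` reproduces the command order seen between `nvmf/common.sh@244` and `nvmf/common.sh@268`.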
00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:23:56.320 ************************************ 00:23:56.320 START TEST nvmf_digest_clean 00:23:56.320 ************************************ 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1125 -- # run_digest 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@724 -- # xtrace_disable 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=2968838 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 2968838 00:23:56.320 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2968838 ']' 00:23:56.321 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:56.321 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:56.321 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:56.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:56.321 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:56.321 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:56.321 [2024-07-26 12:24:49.480707] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:23:56.321 [2024-07-26 12:24:49.480793] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:56.321 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.321 [2024-07-26 12:24:49.542828] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.579 [2024-07-26 12:24:49.650206] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:56.579 [2024-07-26 12:24:49.650254] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:56.579 [2024-07-26 12:24:49.650283] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:56.579 [2024-07-26 12:24:49.650295] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:56.579 [2024-07-26 12:24:49.650305] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
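`waitforlisten 2968838` above blocks until the freshly started `nvmf_tgt` is serving its RPC UNIX socket (`/var/tmp/spdk.sock`, up to `max_retries=100` attempts). A minimal sketch of that polling loop, assuming a hypothetical `wait_for_sock` name and interval rather than SPDK's exact implementation:

```shell
# Minimal sketch of the waitforlisten idea from the log: poll until the
# target's RPC UNIX socket appears, giving up after max_retries attempts.
# wait_for_sock and the default interval are assumptions, not SPDK's code.
wait_for_sock() {
    sock="$1" max_retries="${2:-100}" interval="${3:-0.1}"
    i=0
    while [ "$i" -lt "$max_retries" ]; do
        [ -S "$sock" ] && return 0    # socket exists: target is listening
        i=$((i + 1))
        sleep "$interval"
    done
    return 1                          # timed out waiting for the listener
}
```

Callers typically follow success with an initial RPC (here the `--wait-for-rpc` target is later released via `framework_start_init`), and treat failure as a fatal startup error.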
00:23:56.579 [2024-07-26 12:24:49.650330] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:56.579 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:56.579 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:56.579 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:56.579 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@730 -- # xtrace_disable 00:23:56.579 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:56.579 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:56.579 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:23:56.579 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:23:56.579 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:23:56.579 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@561 -- # xtrace_disable 00:23:56.579 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:56.579 null0 00:23:56.579 [2024-07-26 12:24:49.820270] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:56.837 [2024-07-26 12:24:49.844483] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:56.837 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:23:56.837 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:23:56.837 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:23:56.837 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:23:56.837 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:23:56.837 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:23:56.838 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:23:56.838 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:23:56.838 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2968866 00:23:56.838 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:23:56.838 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2968866 /var/tmp/bperf.sock 00:23:56.838 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2968866 ']' 00:23:56.838 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:23:56.838 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:23:56.838 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:23:56.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
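The `run_bperf randread 4096 128 false` call above expands into the bdevperf invocation and RPC sequence that follow in the log: launch `bdevperf` suspended with `--wait-for-rpc`, release it with `framework_start_init`, attach the TCP controller with data digest enabled (`--ddgst`), then drive I/O via `perform_tests`. The sketch below only echoes that sequence; `build_bperf_sequence` is a hypothetical helper, and the binary/script paths are shortened from the workspace paths shown in the log.

```shell
# Echo the bperf flow from the log (nothing is executed here).
# build_bperf_sequence is a hypothetical helper; bdevperf, rpc.py and
# bdevperf.py stand in for the full workspace paths in the log.
build_bperf_sequence() {
    rw="$1" bs="$2" qd="$3" sock="${4:-/var/tmp/bperf.sock}"
    cat <<EOF
bdevperf -m 2 -r $sock -w $rw -o $bs -t 2 -q $qd -z --wait-for-rpc
rpc.py -s $sock framework_start_init
rpc.py -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
bdevperf.py -s $sock perform_tests
EOF
}
```

The same helper with `randread 131072 16` describes the second run later in the log, where the 128 KiB I/O size triggers the "greater than zero copy threshold (65536)" notice.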
00:23:56.838 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:23:56.838 12:24:49 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:23:56.838 [2024-07-26 12:24:49.896124] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:23:56.838 [2024-07-26 12:24:49.896201] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2968866 ] 00:23:56.838 EAL: No free 2048 kB hugepages reported on node 1 00:23:56.838 [2024-07-26 12:24:49.963831] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:56.838 [2024-07-26 12:24:50.087974] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:57.771 12:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:23:57.771 12:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:23:57.771 12:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:23:57.771 12:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:23:57.771 12:24:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:23:58.029 12:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:58.030 12:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:23:58.632 nvme0n1 00:23:58.632 12:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:23:58.632 12:24:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:23:58.632 Running I/O for 2 seconds... 00:24:00.532 00:24:00.532 Latency(us) 00:24:00.532 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:00.532 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:00.532 nvme0n1 : 2.01 18467.50 72.14 0.00 0.00 6922.91 3737.98 14854.83 00:24:00.532 =================================================================================================================== 00:24:00.532 Total : 18467.50 72.14 0.00 0.00 6922.91 3737.98 14854.83 00:24:00.532 0 00:24:00.532 12:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:00.532 12:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:00.532 12:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:00.532 12:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:00.532 12:24:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:00.532 | select(.opcode=="crc32c") 00:24:00.532 | "\(.module_name) \(.executed)"' 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@94 -- # exp_module=software 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2968866 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2968866 ']' 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2968866 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2968866 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2968866' 00:24:01.097 killing process with pid 2968866 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2968866 00:24:01.097 Received shutdown signal, test time was about 2.000000 seconds 00:24:01.097 00:24:01.097 Latency(us) 00:24:01.097 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:01.097 =================================================================================================================== 00:24:01.097 Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:24:01.097 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2968866 00:24:01.354 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:24:01.354 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:01.354 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:01.354 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:24:01.354 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:01.354 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:01.354 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:01.354 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2969398 00:24:01.354 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:01.355 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2969398 /var/tmp/bperf.sock 00:24:01.355 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2969398 ']' 00:24:01.355 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:01.355 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:01.355 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:01.355 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:01.355 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:01.355 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:01.355 [2024-07-26 12:24:54.445055] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:24:01.355 [2024-07-26 12:24:54.445139] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2969398 ] 00:24:01.355 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:01.355 Zero copy mechanism will not be used. 00:24:01.355 EAL: No free 2048 kB hugepages reported on node 1 00:24:01.355 [2024-07-26 12:24:54.508103] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.613 [2024-07-26 12:24:54.627301] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:01.613 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:01.613 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:24:01.613 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:01.613 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:01.613 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:01.871 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:01.871 12:24:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:02.436 nvme0n1 00:24:02.436 12:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:02.436 12:24:55 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:02.436 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:02.436 Zero copy mechanism will not be used. 00:24:02.436 Running I/O for 2 seconds... 00:24:04.963 00:24:04.963 Latency(us) 00:24:04.963 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.963 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:04.963 nvme0n1 : 2.00 3337.28 417.16 0.00 0.00 4789.91 4247.70 11068.30 00:24:04.963 =================================================================================================================== 00:24:04.963 Total : 3337.28 417.16 0.00 0.00 4789.91 4247.70 11068.30 00:24:04.963 0 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:04.963 | select(.opcode=="crc32c") 
00:24:04.963 | "\(.module_name) \(.executed)"' 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2969398 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2969398 ']' 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2969398 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2969398 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:04.963 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2969398' 00:24:04.963 killing process with pid 2969398 00:24:04.964 12:24:57 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2969398 00:24:04.964 Received shutdown signal, test time was about 2.000000 seconds 00:24:04.964 00:24:04.964 Latency(us) 00:24:04.964 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:04.964 =================================================================================================================== 00:24:04.964 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:04.964 12:24:57 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2969398 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2969815 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2969815 /var/tmp/bperf.sock 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:24:04.964 12:24:58 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2969815 ']' 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:04.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:04.964 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:05.221 [2024-07-26 12:24:58.230920] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:24:05.221 [2024-07-26 12:24:58.230996] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2969815 ] 00:24:05.221 EAL: No free 2048 kB hugepages reported on node 1 00:24:05.221 [2024-07-26 12:24:58.295364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:05.221 [2024-07-26 12:24:58.407309] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:05.221 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:05.221 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:24:05.221 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:05.221 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:05.221 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:05.787 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:05.787 12:24:58 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:06.044 nvme0n1 00:24:06.044 12:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:06.044 12:24:59 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:06.044 Running I/O for 2 seconds... 00:24:08.572 00:24:08.572 Latency(us) 00:24:08.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.572 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:08.572 nvme0n1 : 2.01 20564.73 80.33 0.00 0.00 6209.38 2463.67 12087.75 00:24:08.572 =================================================================================================================== 00:24:08.572 Total : 20564.73 80.33 0.00 0.00 6209.38 2463.67 12087.75 00:24:08.572 0 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:24:08.572 | select(.opcode=="crc32c") 00:24:08.572 | "\(.module_name) \(.executed)"' 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@98 -- # killprocess 2969815 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2969815 ']' 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2969815 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2969815 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2969815' 00:24:08.572 killing process with pid 2969815 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2969815 00:24:08.572 Received shutdown signal, test time was about 2.000000 seconds 00:24:08.572 00:24:08.572 Latency(us) 00:24:08.572 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:08.572 =================================================================================================================== 00:24:08.572 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:08.572 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2969815 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 
-- # local rw bs qd scan_dsa 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=2970340 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 2970340 /var/tmp/bperf.sock 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@831 -- # '[' -z 2970340 ']' 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:08.831 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:08.831 12:25:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:08.831 [2024-07-26 12:25:01.904851] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:24:08.831 [2024-07-26 12:25:01.904926] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2970340 ] 00:24:08.831 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:08.831 Zero copy mechanism will not be used. 00:24:08.831 EAL: No free 2048 kB hugepages reported on node 1 00:24:08.831 [2024-07-26 12:25:01.965623] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:08.831 [2024-07-26 12:25:02.078392] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.089 12:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:09.089 12:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # return 0 00:24:09.089 12:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:24:09.089 12:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:24:09.089 12:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:24:09.347 12:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.347 12:25:02 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:09.913 nvme0n1 00:24:09.913 12:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:24:09.913 12:25:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:09.913 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:09.913 Zero copy mechanism will not be used. 00:24:09.913 Running I/O for 2 seconds... 00:24:11.821 00:24:11.821 Latency(us) 00:24:11.821 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:11.821 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:11.821 nvme0n1 : 2.01 1886.34 235.79 0.00 0.00 8458.34 6747.78 17476.27 00:24:11.821 =================================================================================================================== 00:24:11.821 Total : 1886.34 235.79 0.00 0.00 8458.34 6747.78 17476.27 00:24:11.821 0 00:24:11.821 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:24:11.821 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:24:11.821 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:24:11.821 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:24:11.821 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 
00:24:11.821 | select(.opcode=="crc32c") 00:24:11.821 | "\(.module_name) \(.executed)"' 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 2970340 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2970340 ']' 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2970340 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2970340 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2970340' 00:24:12.079 killing process with pid 2970340 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2970340 00:24:12.079 Received shutdown signal, test time was about 2.000000 seconds 00:24:12.079 
00:24:12.079 Latency(us) 00:24:12.079 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:12.079 =================================================================================================================== 00:24:12.079 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:12.079 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2970340 00:24:12.337 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 2968838 00:24:12.337 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@950 -- # '[' -z 2968838 ']' 00:24:12.337 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # kill -0 2968838 00:24:12.337 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # uname 00:24:12.337 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:12.337 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2968838 00:24:12.595 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:12.595 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:12.595 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2968838' 00:24:12.595 killing process with pid 2968838 00:24:12.595 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@969 -- # kill 2968838 00:24:12.595 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@974 -- # wait 2968838 00:24:12.854 00:24:12.854 real 0m16.453s 00:24:12.854 user 0m33.239s 00:24:12.854 sys 0m3.970s 00:24:12.854 12:25:05 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:24:12.854 ************************************ 00:24:12.854 END TEST nvmf_digest_clean 00:24:12.854 ************************************ 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:12.854 ************************************ 00:24:12.854 START TEST nvmf_digest_error 00:24:12.854 ************************************ 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1125 -- # run_digest_error 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=2970779 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # 
waitforlisten 2970779 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2970779 ']' 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:12.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:12.854 12:25:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:12.854 [2024-07-26 12:25:05.985734] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:24:12.854 [2024-07-26 12:25:05.985808] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:12.854 EAL: No free 2048 kB hugepages reported on node 1 00:24:12.854 [2024-07-26 12:25:06.051000] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.112 [2024-07-26 12:25:06.158219] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:13.112 [2024-07-26 12:25:06.158276] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:13.112 [2024-07-26 12:25:06.158305] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:13.112 [2024-07-26 12:25:06.158316] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:13.112 [2024-07-26 12:25:06.158326] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:13.112 [2024-07-26 12:25:06.158381] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:13.113 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:13.113 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:24:13.113 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:13.113 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:13.113 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.113 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:13.113 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:24:13.113 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.113 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.113 [2024-07-26 12:25:06.222893] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:24:13.113 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.113 12:25:06 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:24:13.113 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:24:13.113 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:13.113 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.113 null0 00:24:13.113 [2024-07-26 12:25:06.340390] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:13.113 [2024-07-26 12:25:06.364612] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2970918 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2970918 /var/tmp/bperf.sock 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2970918 ']' 
00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:13.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:13.371 12:25:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:13.371 [2024-07-26 12:25:06.412866] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:24:13.371 [2024-07-26 12:25:06.412950] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2970918 ] 00:24:13.371 EAL: No free 2048 kB hugepages reported on node 1 00:24:13.371 [2024-07-26 12:25:06.473559] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:13.372 [2024-07-26 12:25:06.589885] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:14.304 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:14.304 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:24:14.304 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:14.304 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:14.563 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:14.563 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.563 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:14.563 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.563 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.563 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:14.847 nvme0n1 00:24:14.847 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:14.847 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:14.847 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:14.847 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:14.847 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:14.847 12:25:07 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:14.847 Running I/O for 2 seconds... 00:24:14.847 [2024-07-26 12:25:08.058183] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:14.847 [2024-07-26 12:25:08.058238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:8880 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.847 [2024-07-26 12:25:08.058262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.847 [2024-07-26 12:25:08.068722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:14.847 [2024-07-26 12:25:08.068769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:8486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.847 [2024-07-26 12:25:08.068786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:14.847 [2024-07-26 12:25:08.083528] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:14.847 [2024-07-26 12:25:08.083559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:1407 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:14.847 [2024-07-26 12:25:08.083582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.098677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.098708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:1013 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.098730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.110299] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.110328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:4133 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.110358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.123307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.123351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:1766 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.123383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.137199] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.137229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:17342 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.137247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.147721] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.147748] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:98 nsid:1 lba:10703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.147779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.160790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.160820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:5929 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.160836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.175364] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.175394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.175410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.185967] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.185996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:14529 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.186026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.201008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 
12:25:08.201045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:10455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.201085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.213811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.213842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:5891 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.213860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.224500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.224527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:6943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.224558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.237645] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.237676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:11333 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.237693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.250508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.250538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:4373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.250569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.262750] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.262778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:7527 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.262809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.276309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.276355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:20267 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.276372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.287868] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.287899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:21369 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.287916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.301905] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.113 [2024-07-26 12:25:08.301939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:7043 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.113 [2024-07-26 12:25:08.301973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.113 [2024-07-26 12:25:08.316360] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.114 [2024-07-26 12:25:08.316392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12410 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.114 [2024-07-26 12:25:08.316410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.114 [2024-07-26 12:25:08.327784] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.114 [2024-07-26 12:25:08.327815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18462 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.114 [2024-07-26 12:25:08.327832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.114 [2024-07-26 12:25:08.342768] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.114 [2024-07-26 12:25:08.342798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:6906 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.114 [2024-07-26 12:25:08.342831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:15.114 [2024-07-26 12:25:08.353790] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.114 [2024-07-26 12:25:08.353817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:11 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.114 [2024-07-26 12:25:08.353848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.374 [2024-07-26 12:25:08.368209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.374 [2024-07-26 12:25:08.368238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12499 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.374 [2024-07-26 12:25:08.368269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.374 [2024-07-26 12:25:08.381705] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.374 [2024-07-26 12:25:08.381750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:13209 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.374 [2024-07-26 12:25:08.381774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.374 [2024-07-26 12:25:08.393968] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.374 [2024-07-26 12:25:08.394013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.374 [2024-07-26 12:25:08.394030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.374 [2024-07-26 12:25:08.407679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.374 [2024-07-26 12:25:08.407707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:2950 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.374 [2024-07-26 12:25:08.407738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.374 [2024-07-26 12:25:08.422242] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.374 [2024-07-26 12:25:08.422273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.374 [2024-07-26 12:25:08.422301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.374 [2024-07-26 12:25:08.433726] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.374 [2024-07-26 12:25:08.433756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:7983 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.374 [2024-07-26 12:25:08.433772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.374 [2024-07-26 12:25:08.449920] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.374 [2024-07-26 12:25:08.449950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:2696 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.374 [2024-07-26 12:25:08.449971] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.374 [2024-07-26 12:25:08.462671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.374 [2024-07-26 12:25:08.462708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:3518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.374 [2024-07-26 12:25:08.462726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.374 [2024-07-26 12:25:08.474668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.374 [2024-07-26 12:25:08.474700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:4060 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.374 [2024-07-26 12:25:08.474726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.374 [2024-07-26 12:25:08.489024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.374 [2024-07-26 12:25:08.489056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.374 [2024-07-26 12:25:08.489082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.374 [2024-07-26 12:25:08.500291] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.374 [2024-07-26 12:25:08.500324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:18176 len:1 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:15.374 [2024-07-26 12:25:08.500356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.374 [2024-07-26 12:25:08.513328] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.374 [2024-07-26 12:25:08.513369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:9067 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.374 [2024-07-26 12:25:08.513385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.374 [2024-07-26 12:25:08.526787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.375 [2024-07-26 12:25:08.526833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:1511 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.375 [2024-07-26 12:25:08.526851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.375 [2024-07-26 12:25:08.539093] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.375 [2024-07-26 12:25:08.539133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4253 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.375 [2024-07-26 12:25:08.539150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.375 [2024-07-26 12:25:08.552585] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.375 [2024-07-26 12:25:08.552614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:72 nsid:1 lba:18586 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.375 [2024-07-26 12:25:08.552630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.375 [2024-07-26 12:25:08.563712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.375 [2024-07-26 12:25:08.563741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:14393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.375 [2024-07-26 12:25:08.563771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.375 [2024-07-26 12:25:08.578366] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.375 [2024-07-26 12:25:08.578395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:13293 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.375 [2024-07-26 12:25:08.578426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.375 [2024-07-26 12:25:08.589554] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.375 [2024-07-26 12:25:08.589585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:4936 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.375 [2024-07-26 12:25:08.589602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.375 [2024-07-26 12:25:08.601890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.375 [2024-07-26 12:25:08.601921] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:24358 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.375 [2024-07-26 12:25:08.601938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.375 [2024-07-26 12:25:08.615555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.375 [2024-07-26 12:25:08.615589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:7360 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.375 [2024-07-26 12:25:08.615609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.635 [2024-07-26 12:25:08.630025] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.635 [2024-07-26 12:25:08.630074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:22708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.635 [2024-07-26 12:25:08.630095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.635 [2024-07-26 12:25:08.643525] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.635 [2024-07-26 12:25:08.643559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13236 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.635 [2024-07-26 12:25:08.643593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.635 [2024-07-26 12:25:08.659637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x989cb0) 00:24:15.635 [2024-07-26 12:25:08.659670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17886 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.635 [2024-07-26 12:25:08.659690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.635 [2024-07-26 12:25:08.671326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.635 [2024-07-26 12:25:08.671354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:9352 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.635 [2024-07-26 12:25:08.671369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.635 [2024-07-26 12:25:08.688026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.635 [2024-07-26 12:25:08.688067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.635 [2024-07-26 12:25:08.688089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.635 [2024-07-26 12:25:08.702600] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.635 [2024-07-26 12:25:08.702634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:8481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.635 [2024-07-26 12:25:08.702665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.635 [2024-07-26 12:25:08.715099] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.635 [2024-07-26 12:25:08.715146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:5489 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.635 [2024-07-26 12:25:08.715162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.635 [2024-07-26 12:25:08.730289] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.635 [2024-07-26 12:25:08.730316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:21518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.635 [2024-07-26 12:25:08.730347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.635 [2024-07-26 12:25:08.748268] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.635 [2024-07-26 12:25:08.748297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.635 [2024-07-26 12:25:08.748328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.635 [2024-07-26 12:25:08.764623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.635 [2024-07-26 12:25:08.764657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:24257 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.635 [2024-07-26 12:25:08.764676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:15.635 [2024-07-26 12:25:08.775575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.635 [2024-07-26 12:25:08.775616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:25086 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.635 [2024-07-26 12:25:08.775636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.636 [2024-07-26 12:25:08.792255] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.636 [2024-07-26 12:25:08.792284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:8889 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.636 [2024-07-26 12:25:08.792315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.636 [2024-07-26 12:25:08.809357] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.636 [2024-07-26 12:25:08.809388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:6101 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.636 [2024-07-26 12:25:08.809421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:15.636 [2024-07-26 12:25:08.821737] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:15.636 [2024-07-26 12:25:08.821773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:15.636 [2024-07-26 12:25:08.821791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.636 [2024-07-26 12:25:08.836839] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.636 [2024-07-26 12:25:08.836874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:2321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.636 [2024-07-26 12:25:08.836893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.636 [2024-07-26 12:25:08.850644] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.636 [2024-07-26 12:25:08.850680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21731 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.636 [2024-07-26 12:25:08.850699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.636 [2024-07-26 12:25:08.865606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.636 [2024-07-26 12:25:08.865641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.636 [2024-07-26 12:25:08.865660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.636 [2024-07-26 12:25:08.877117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.636 [2024-07-26 12:25:08.877147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9999 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.636 [2024-07-26 12:25:08.877163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.897 [2024-07-26 12:25:08.894564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.897 [2024-07-26 12:25:08.894599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14248 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.897 [2024-07-26 12:25:08.894619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.897 [2024-07-26 12:25:08.906597] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.897 [2024-07-26 12:25:08.906631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:7637 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.897 [2024-07-26 12:25:08.906649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.897 [2024-07-26 12:25:08.921911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.897 [2024-07-26 12:25:08.921947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:10631 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.897 [2024-07-26 12:25:08.921976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.897 [2024-07-26 12:25:08.933981] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.897 [2024-07-26 12:25:08.934015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:16848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.897 [2024-07-26 12:25:08.934035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.897 [2024-07-26 12:25:08.948142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.897 [2024-07-26 12:25:08.948171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:47 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.897 [2024-07-26 12:25:08.948186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.897 [2024-07-26 12:25:08.962741] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.897 [2024-07-26 12:25:08.962776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5578 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.897 [2024-07-26 12:25:08.962795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.897 [2024-07-26 12:25:08.976451] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.897 [2024-07-26 12:25:08.976487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:19846 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.897 [2024-07-26 12:25:08.976507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.897 [2024-07-26 12:25:08.989407] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.897 [2024-07-26 12:25:08.989441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:8426 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.897 [2024-07-26 12:25:08.989461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.897 [2024-07-26 12:25:09.002316] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.897 [2024-07-26 12:25:09.002360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:1158 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.897 [2024-07-26 12:25:09.002375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.898 [2024-07-26 12:25:09.018463] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.898 [2024-07-26 12:25:09.018498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:6122 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.898 [2024-07-26 12:25:09.018526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.898 [2024-07-26 12:25:09.033339] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.898 [2024-07-26 12:25:09.033370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:10222 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.898 [2024-07-26 12:25:09.033403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.898 [2024-07-26 12:25:09.045928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.898 [2024-07-26 12:25:09.045962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:7487 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.898 [2024-07-26 12:25:09.045981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.898 [2024-07-26 12:25:09.060441] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.898 [2024-07-26 12:25:09.060490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15353 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.898 [2024-07-26 12:25:09.060509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.898 [2024-07-26 12:25:09.075796] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.898 [2024-07-26 12:25:09.075831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:18512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.898 [2024-07-26 12:25:09.075850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.898 [2024-07-26 12:25:09.088237] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.898 [2024-07-26 12:25:09.088281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:9988 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.898 [2024-07-26 12:25:09.088297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.898 [2024-07-26 12:25:09.104209] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.898 [2024-07-26 12:25:09.104241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.898 [2024-07-26 12:25:09.104259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.898 [2024-07-26 12:25:09.119678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.898 [2024-07-26 12:25:09.119713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:12613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.898 [2024-07-26 12:25:09.119732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.898 [2024-07-26 12:25:09.133471] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.898 [2024-07-26 12:25:09.133505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.898 [2024-07-26 12:25:09.133524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:15.898 [2024-07-26 12:25:09.147683] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:15.898 [2024-07-26 12:25:09.147725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:15.898 [2024-07-26 12:25:09.147746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.161510] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.161545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:6861 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.161564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.173762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.173796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:24729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.173821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.188616] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.188651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:20736 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.188670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.201785] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.201820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:24186 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.201839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.216636] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.216672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:21881 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.216691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.228890] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.228925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.228944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.244148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.244192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:1068 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.244210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.256810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.256845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:10135 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.256864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.271572] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.271607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:4166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.271626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.287786] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.287826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:18749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.287858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.299742] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.299777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:20325 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.299796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.313144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.313173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:8920 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.313188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.327481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.327516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:23494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.327534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.341918] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.341952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:22550 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.341971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.354016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.354051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.354080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.157 [2024-07-26 12:25:09.369838] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.157 [2024-07-26 12:25:09.369873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:12976 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.157 [2024-07-26 12:25:09.369892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.158 [2024-07-26 12:25:09.384215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.158 [2024-07-26 12:25:09.384247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:3703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.158 [2024-07-26 12:25:09.384288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.158 [2024-07-26 12:25:09.397498] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.158 [2024-07-26 12:25:09.397532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15374 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.158 [2024-07-26 12:25:09.397552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.413078] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.413125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:7694 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.413143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.426925] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.426960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:518 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.426980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.438550] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.438584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:22223 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.438604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.455006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.455040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11904 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.455068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.467606] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.467642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:19425 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.467667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.481573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.481608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:14896 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.481628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.494936] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.494970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.494989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.509720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.509756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:6716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.509775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.522074] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.522114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.522133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.536888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.536923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:2291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.536942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.551088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.551136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10721 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.551153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.564587] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.564621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:17788 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.564640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.579914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.579961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:12420 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.579982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.594814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.594848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:3716 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.594867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.608286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.608324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:22321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.608367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.623286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.623316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:18415 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.623340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.634420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.634448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:20823 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.634464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.650139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.650170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:18314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.650188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.417 [2024-07-26 12:25:09.666206] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.417 [2024-07-26 12:25:09.666237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:12106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.417 [2024-07-26 12:25:09.666254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.680192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.680223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:179 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.680241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.693445] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.693480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22376 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.693499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.705781] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.705816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:23321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.705835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.719674] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.719708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:14927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.719727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.735158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.735189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10326 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.735205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.746818] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.746866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:17935 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.746887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.763564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.763600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:7350 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.763619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.777411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.777446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:84 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.777465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.789775] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.789809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16084 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.789828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.803979] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.804014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:5016 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.804033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.818792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.818827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:25176 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.818846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.830480] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.830515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:8022 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.830534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.845836] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.845871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:7580 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.845891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.860007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.860042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.860069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.872318] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.872365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:4708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.872385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.885995] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.886030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:20810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.886049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.900506] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.900548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:16.677 [2024-07-26 12:25:09.900568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:24:16.677 [2024-07-26 12:25:09.912473] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0)
00:24:16.677 [2024-07-26 12:25:09.912508]
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:6944 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.677 [2024-07-26 12:25:09.912527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.677 [2024-07-26 12:25:09.928022] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:16.677 [2024-07-26 12:25:09.928057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8625 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.677 [2024-07-26 12:25:09.928087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.936 [2024-07-26 12:25:09.943677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:16.936 [2024-07-26 12:25:09.943715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:17937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.936 [2024-07-26 12:25:09.943734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.936 [2024-07-26 12:25:09.954459] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:16.936 [2024-07-26 12:25:09.954494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:237 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.936 [2024-07-26 12:25:09.954513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.936 [2024-07-26 12:25:09.969462] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x989cb0) 00:24:16.936 [2024-07-26 12:25:09.969497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:6755 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.936 [2024-07-26 12:25:09.969517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.936 [2024-07-26 12:25:09.986065] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:16.936 [2024-07-26 12:25:09.986112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.936 [2024-07-26 12:25:09.986136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.936 [2024-07-26 12:25:09.997915] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:16.936 [2024-07-26 12:25:09.997950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:20859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.936 [2024-07-26 12:25:09.997969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.936 [2024-07-26 12:25:10.014830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:16.936 [2024-07-26 12:25:10.014885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:4481 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.936 [2024-07-26 12:25:10.014906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.936 [2024-07-26 12:25:10.029261] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:16.936 [2024-07-26 12:25:10.029301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9997 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.936 [2024-07-26 12:25:10.029320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.936 [2024-07-26 12:25:10.043322] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x989cb0) 00:24:16.936 [2024-07-26 12:25:10.043386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:22924 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:16.936 [2024-07-26 12:25:10.043420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:16.936 00:24:16.936 Latency(us) 00:24:16.936 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:16.936 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:24:16.936 nvme0n1 : 2.00 18492.90 72.24 0.00 0.00 6911.53 3495.25 22622.06 00:24:16.936 =================================================================================================================== 00:24:16.936 Total : 18492.90 72.24 0.00 0.00 6911.53 3495.25 22622.06 00:24:16.936 0 00:24:16.936 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:16.937 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:16.937 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:16.937 12:25:10 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:16.937 | .driver_specific 00:24:16.937 | .nvme_error 00:24:16.937 | .status_code 00:24:16.937 | .command_transient_transport_error' 00:24:17.195 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 145 > 0 )) 00:24:17.195 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2970918 00:24:17.195 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2970918 ']' 00:24:17.195 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2970918 00:24:17.195 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:24:17.195 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:17.195 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2970918 00:24:17.195 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:17.195 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:17.195 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2970918' 00:24:17.195 killing process with pid 2970918 00:24:17.195 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2970918 00:24:17.195 Received shutdown signal, test time was about 2.000000 seconds 00:24:17.195 00:24:17.195 Latency(us) 00:24:17.195 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:17.195 
=================================================================================================================== 00:24:17.195 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:17.195 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2970918 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2971338 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2971338 /var/tmp/bperf.sock 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2971338 ']' 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:24:17.453 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:17.453 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.453 [2024-07-26 12:25:10.638142] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:24:17.453 [2024-07-26 12:25:10.638238] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2971338 ] 00:24:17.453 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:17.453 Zero copy mechanism will not be used. 00:24:17.453 EAL: No free 2048 kB hugepages reported on node 1 00:24:17.453 [2024-07-26 12:25:10.700931] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:17.711 [2024-07-26 12:25:10.818846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:17.711 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:17.711 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:24:17.711 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:17.712 12:25:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:17.970 12:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:17.970 12:25:11 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:17.970 12:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:17.970 12:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:17.970 12:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:17.970 12:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:18.536 nvme0n1 00:24:18.536 12:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:18.536 12:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:18.536 12:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:18.536 12:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:18.536 12:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:18.536 12:25:11 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:18.536 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:18.536 Zero copy mechanism will not be used. 00:24:18.536 Running I/O for 2 seconds... 
00:24:18.536 [2024-07-26 12:25:11.754288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.536 [2024-07-26 12:25:11.754344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.536 [2024-07-26 12:25:11.754375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.536 [2024-07-26 12:25:11.765830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.536 [2024-07-26 12:25:11.765866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.536 [2024-07-26 12:25:11.765884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.536 [2024-07-26 12:25:11.777679] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.536 [2024-07-26 12:25:11.777727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.536 [2024-07-26 12:25:11.777745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.789615] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.789647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.789665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.802001] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.802034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.802051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.813966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.813999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.814016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.826198] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.826234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.826252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.837848] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.837884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.837901] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.849696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.849728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.849745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.861546] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.861579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.861597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.874142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.874176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.874193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.886658] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.886690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.886708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.899442] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.899476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.899502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.911311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.911343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.911360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.924408] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.924442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.924459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.937244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.937277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:15 nsid:1 lba:384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.937293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.949769] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.949801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.949833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.960862] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.960895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.960911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.973378] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.973409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.973425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.985141] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.985173] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.985191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.796 [2024-07-26 12:25:11.996997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.796 [2024-07-26 12:25:11.997029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.796 [2024-07-26 12:25:11.997046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:18.797 [2024-07-26 12:25:12.009190] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.797 [2024-07-26 12:25:12.009229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-26 12:25:12.009247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:18.797 [2024-07-26 12:25:12.020694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.797 [2024-07-26 12:25:12.020727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-26 12:25:12.020744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:18.797 [2024-07-26 12:25:12.032266] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x1d2e290) 00:24:18.797 [2024-07-26 12:25:12.032314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-26 12:25:12.032332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:18.797 [2024-07-26 12:25:12.043548] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:18.797 [2024-07-26 12:25:12.043580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:18.797 [2024-07-26 12:25:12.043597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.057 [2024-07-26 12:25:12.055071] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.057 [2024-07-26 12:25:12.055117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.057 [2024-07-26 12:25:12.055135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.057 [2024-07-26 12:25:12.066854] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.057 [2024-07-26 12:25:12.066901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.057 [2024-07-26 12:25:12.066918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.057 [2024-07-26 12:25:12.078128] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.057 [2024-07-26 12:25:12.078161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.057 [2024-07-26 12:25:12.078178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.057 [2024-07-26 12:25:12.089962] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.057 [2024-07-26 12:25:12.089995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.057 [2024-07-26 12:25:12.090013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.057 [2024-07-26 12:25:12.101213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.057 [2024-07-26 12:25:12.101246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.057 [2024-07-26 12:25:12.101263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.057 [2024-07-26 12:25:12.112533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.057 [2024-07-26 12:25:12.112565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.057 [2024-07-26 12:25:12.112582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:24:19.057 [2024-07-26 12:25:12.124727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.057 [2024-07-26 12:25:12.124782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.057 [2024-07-26 12:25:12.124802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.057 [2024-07-26 12:25:12.135446] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.057 [2024-07-26 12:25:12.135494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.057 [2024-07-26 12:25:12.135512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.057 [2024-07-26 12:25:12.146983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.147015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.147033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.157762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.157794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.157811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.169103] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.169151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.169168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.180007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.180039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.180056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.191295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.191328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.191346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.202490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.202536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 
12:25:12.202562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.214545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.214577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.214594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.225883] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.225915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.225933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.237343] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.237375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.237392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.248671] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.248705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.248721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.260421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.260468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.260485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.272458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.272492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.272510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.284766] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.284798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.284815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.296251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.296284] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.296301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.058 [2024-07-26 12:25:12.308086] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.058 [2024-07-26 12:25:12.308118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.058 [2024-07-26 12:25:12.308136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.319375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.319409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.319426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.331016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.331048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.331074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.343213] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.343246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.343264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.354549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.354594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.354611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.366038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.366077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.366096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.377465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.377511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.377528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.388445] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.388477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.388493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.400109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.400142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.400167] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.411677] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.411725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.411742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.424501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.424537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.424556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0041 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.437814] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.437850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.437870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.451109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.451142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.451159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.464251] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.464283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.464301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.478139] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.478171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.478188] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.490834] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.490874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.319 [2024-07-26 12:25:12.490893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.319 [2024-07-26 12:25:12.502696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.319 [2024-07-26 12:25:12.502732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.320 [2024-07-26 12:25:12.502752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.320 [2024-07-26 12:25:12.515712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.320 [2024-07-26 12:25:12.515754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.320 [2024-07-26 12:25:12.515775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.320 [2024-07-26 12:25:12.528479] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.320 [2024-07-26 12:25:12.528515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.320 [2024-07-26 
12:25:12.528535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.320 [2024-07-26 12:25:12.540284] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.320 [2024-07-26 12:25:12.540316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.320 [2024-07-26 12:25:12.540333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.320 [2024-07-26 12:25:12.553203] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.320 [2024-07-26 12:25:12.553235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.320 [2024-07-26 12:25:12.553267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.320 [2024-07-26 12:25:12.564929] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.320 [2024-07-26 12:25:12.564965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.320 [2024-07-26 12:25:12.564985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.581 [2024-07-26 12:25:12.577686] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.581 [2024-07-26 12:25:12.577722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.581 [2024-07-26 12:25:12.577742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.581 [2024-07-26 12:25:12.589923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.581 [2024-07-26 12:25:12.589959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.581 [2024-07-26 12:25:12.589978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.581 [2024-07-26 12:25:12.602038] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.581 [2024-07-26 12:25:12.602084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.581 [2024-07-26 12:25:12.602120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.581 [2024-07-26 12:25:12.614673] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.581 [2024-07-26 12:25:12.614710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.581 [2024-07-26 12:25:12.614730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.581 [2024-07-26 12:25:12.628545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.581 [2024-07-26 12:25:12.628581] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.581 [2024-07-26 12:25:12.628602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.581 [2024-07-26 12:25:12.640470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.581 [2024-07-26 12:25:12.640506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.581 [2024-07-26 12:25:12.640526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.581 [2024-07-26 12:25:12.653464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.581 [2024-07-26 12:25:12.653502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.581 [2024-07-26 12:25:12.653522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.581 [2024-07-26 12:25:12.666053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.581 [2024-07-26 12:25:12.666098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.581 [2024-07-26 12:25:12.666134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.581 [2024-07-26 12:25:12.679370] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d2e290) 00:24:19.581 [2024-07-26 12:25:12.679417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.581 [2024-07-26 12:25:12.679438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.581 [2024-07-26 12:25:12.692286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.581 [2024-07-26 12:25:12.692318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.581 [2024-07-26 12:25:12.692335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.581 [2024-07-26 12:25:12.705088] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.581 [2024-07-26 12:25:12.705139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.581 [2024-07-26 12:25:12.705156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.581 [2024-07-26 12:25:12.716530] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.581 [2024-07-26 12:25:12.716567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.581 [2024-07-26 12:25:12.716586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.581 [2024-07-26 12:25:12.729721] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.581 [2024-07-26 12:25:12.729757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.582 [2024-07-26 12:25:12.729782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.582 [2024-07-26 12:25:12.742329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.582 [2024-07-26 12:25:12.742377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.582 [2024-07-26 12:25:12.742395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.582 [2024-07-26 12:25:12.755252] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.582 [2024-07-26 12:25:12.755285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.582 [2024-07-26 12:25:12.755302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.582 [2024-07-26 12:25:12.767613] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.582 [2024-07-26 12:25:12.767650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.582 [2024-07-26 12:25:12.767670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0021 p:0 m:0 dnr:0 00:24:19.582 [2024-07-26 12:25:12.780817] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.582 [2024-07-26 12:25:12.780854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.582 [2024-07-26 12:25:12.780875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.582 [2024-07-26 12:25:12.793522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.582 [2024-07-26 12:25:12.793558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.582 [2024-07-26 12:25:12.793578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.582 [2024-07-26 12:25:12.806584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.582 [2024-07-26 12:25:12.806621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.582 [2024-07-26 12:25:12.806640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.582 [2024-07-26 12:25:12.819508] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.582 [2024-07-26 12:25:12.819544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.582 [2024-07-26 12:25:12.819564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.582 [2024-07-26 12:25:12.832007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.582 [2024-07-26 12:25:12.832043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.582 [2024-07-26 12:25:12.832071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:12.844313] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:12.844373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:12.844393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:12.856830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:12.856866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:12.856886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:12.870070] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:12.870121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 
12:25:12.870140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:12.882668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:12.882706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:12.882726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:12.895787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:12.895823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:12.895842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:12.909187] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:12.909220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:12.909237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:12.922449] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:12.922485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:12.922505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:12.935594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:12.935630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:12.935649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:12.948903] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:12.948939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:12.948965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:12.961696] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:12.961731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:12.961752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:12.975261] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:12.975292] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:12.975310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:12.989326] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:12.989375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:12.989397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:13.001584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:13.001621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:13.001641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:13.014794] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:13.014830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:13.014850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:13.028711] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:13.028748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:13.028773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:13.041835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:13.041872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:13.041892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:13.055841] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:13.055878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:13.055897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:13.068523] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:13.068564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:13.068585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:19.842 [2024-07-26 12:25:13.081306] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.842 [2024-07-26 12:25:13.081369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.842 [2024-07-26 12:25:13.081387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:19.843 [2024-07-26 12:25:13.094026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:19.843 [2024-07-26 12:25:13.094071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:19.843 [2024-07-26 12:25:13.094109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.106892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.106929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.106948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.120522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.120558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.120578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.132500] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.132537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.132556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.145496] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.145533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.145552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.158778] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.158814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.158834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.172113] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.172149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.172166] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.184855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.184891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.184911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.198923] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.198961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.198980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.212454] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.212490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.212510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.224502] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.224539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 
12:25:13.224558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.238032] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.238077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.238115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.251971] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.252007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.252026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.265763] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.265798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.265818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.278810] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.278845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.278865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.291966] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.292003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.292029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.304017] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.304053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.304084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.317317] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.317364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.317381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.329503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.329539] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.329559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.342215] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.342261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.342279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.105 [2024-07-26 12:25:13.355264] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.105 [2024-07-26 12:25:13.355297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.105 [2024-07-26 12:25:13.355314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.366 [2024-07-26 12:25:13.367308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.366 [2024-07-26 12:25:13.367357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.366 [2024-07-26 12:25:13.367374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.366 [2024-07-26 12:25:13.379240] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d2e290) 00:24:20.366 [2024-07-26 12:25:13.379273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.379291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.391983] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.392019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.392038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.404865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.404901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.404920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.417798] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.417831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.417849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.431043] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.431088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.431123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.444120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.444154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.444173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.456260] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.456294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.456311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.468617] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.468650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.468667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 
sqhd:0061 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.480489] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.480521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.480538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.492765] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.492798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.492814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.503911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.503943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.503969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.516117] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.516149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.516167] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.527611] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.527644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.527661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.539843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.539875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.539892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.551855] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.551888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.551906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.564080] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.564129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 
12:25:13.564147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.575827] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.575860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.575877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.587325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.587374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.587391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.598891] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.598924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.598942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.367 [2024-07-26 12:25:13.611109] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.367 [2024-07-26 12:25:13.611149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12288 len:32 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.367 [2024-07-26 12:25:13.611168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.626 [2024-07-26 12:25:13.623216] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.626 [2024-07-26 12:25:13.623248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-07-26 12:25:13.623265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.626 [2024-07-26 12:25:13.635351] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.626 [2024-07-26 12:25:13.635386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-07-26 12:25:13.635404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.626 [2024-07-26 12:25:13.646257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.626 [2024-07-26 12:25:13.646289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-07-26 12:25:13.646305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.626 [2024-07-26 12:25:13.658128] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.626 [2024-07-26 12:25:13.658161] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-07-26 12:25:13.658178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.626 [2024-07-26 12:25:13.671016] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.626 [2024-07-26 12:25:13.671049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-07-26 12:25:13.671088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.626 [2024-07-26 12:25:13.681947] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.626 [2024-07-26 12:25:13.681979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-07-26 12:25:13.681996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.626 [2024-07-26 12:25:13.694559] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.626 [2024-07-26 12:25:13.694591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-07-26 12:25:13.694608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.626 [2024-07-26 12:25:13.706101] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x1d2e290) 00:24:20.626 [2024-07-26 12:25:13.706148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-07-26 12:25:13.706165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:20.626 [2024-07-26 12:25:13.717719] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.626 [2024-07-26 12:25:13.717753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-07-26 12:25:13.717771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:20.626 [2024-07-26 12:25:13.728829] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.626 [2024-07-26 12:25:13.728861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-07-26 12:25:13.728878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:20.626 [2024-07-26 12:25:13.741227] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d2e290) 00:24:20.626 [2024-07-26 12:25:13.741261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:20.626 [2024-07-26 12:25:13.741294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:20.626 00:24:20.626 Latency(us) 00:24:20.626 Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.626 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:24:20.626 nvme0n1 : 2.00 2507.41 313.43 0.00 0.00 6374.18 4805.97 14563.56 00:24:20.626 =================================================================================================================== 00:24:20.626 Total : 2507.41 313.43 0.00 0.00 6374.18 4805.97 14563.56 00:24:20.626 0 00:24:20.626 12:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:20.626 12:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:20.626 12:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:20.626 12:25:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:20.626 | .driver_specific 00:24:20.626 | .nvme_error 00:24:20.626 | .status_code 00:24:20.626 | .command_transient_transport_error' 00:24:20.885 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 162 > 0 )) 00:24:20.885 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2971338 00:24:20.885 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2971338 ']' 00:24:20.885 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2971338 00:24:20.885 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:24:20.885 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:20.885 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps 
--no-headers -o comm= 2971338 00:24:20.885 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:20.885 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:20.885 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2971338' 00:24:20.885 killing process with pid 2971338 00:24:20.885 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2971338 00:24:20.885 Received shutdown signal, test time was about 2.000000 seconds 00:24:20.885 00:24:20.885 Latency(us) 00:24:20.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:20.885 =================================================================================================================== 00:24:20.885 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:20.885 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2971338 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2971861 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w 
randwrite -o 4096 -t 2 -q 128 -z 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2971861 /var/tmp/bperf.sock 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2971861 ']' 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:21.144 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:21.144 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:21.144 [2024-07-26 12:25:14.342392] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:24:21.144 [2024-07-26 12:25:14.342491] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2971861 ] 00:24:21.144 EAL: No free 2048 kB hugepages reported on node 1 00:24:21.402 [2024-07-26 12:25:14.404589] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:21.402 [2024-07-26 12:25:14.519509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:21.402 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:21.402 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:24:21.402 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:21.402 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:21.969 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:21.969 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:21.969 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:21.969 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:21.969 12:25:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:21.969 12:25:14 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:22.228 nvme0n1 00:24:22.228 12:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:24:22.228 12:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:22.228 12:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:22.228 12:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:22.228 12:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:22.228 12:25:15 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:22.228 Running I/O for 2 seconds... 
00:24:22.228 [2024-07-26 12:25:15.479310] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190edd58 00:24:22.228 [2024-07-26 12:25:15.480510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24914 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.228 [2024-07-26 12:25:15.480553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:22.486 [2024-07-26 12:25:15.491734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fa3a0 00:24:22.486 [2024-07-26 12:25:15.492847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:7745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.486 [2024-07-26 12:25:15.492892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:22.486 [2024-07-26 12:25:15.506141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fd640 00:24:22.486 [2024-07-26 12:25:15.507431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:6579 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.486 [2024-07-26 12:25:15.507463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:22.486 [2024-07-26 12:25:15.519406] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190eee38 00:24:22.486 [2024-07-26 12:25:15.520829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:17460 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.486 [2024-07-26 12:25:15.520874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:89 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:22.486 [2024-07-26 12:25:15.531426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e49b0 00:24:22.486 [2024-07-26 12:25:15.532845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:11533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.486 [2024-07-26 12:25:15.532873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:22.486 [2024-07-26 12:25:15.543423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e6b70 00:24:22.486 [2024-07-26 12:25:15.544354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:361 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.486 [2024-07-26 12:25:15.544391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:22.486 [2024-07-26 12:25:15.556339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f1430 00:24:22.486 [2024-07-26 12:25:15.557075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:15920 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.486 [2024-07-26 12:25:15.557104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:22.486 [2024-07-26 12:25:15.570893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fac10 00:24:22.487 [2024-07-26 12:25:15.572629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:229 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.487 [2024-07-26 12:25:15.572680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:22.487 [2024-07-26 12:25:15.584171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f4b08 00:24:22.487 [2024-07-26 12:25:15.586096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:4855 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.487 [2024-07-26 12:25:15.586126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:22.487 [2024-07-26 12:25:15.595860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190edd58 00:24:22.487 [2024-07-26 12:25:15.597299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:5588 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.487 [2024-07-26 12:25:15.597330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:22.487 [2024-07-26 12:25:15.608529] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f31b8 00:24:22.487 [2024-07-26 12:25:15.609952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.487 [2024-07-26 12:25:15.609997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:22.487 [2024-07-26 12:25:15.621212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fd208 00:24:22.487 [2024-07-26 12:25:15.622628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20465 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.487 [2024-07-26 12:25:15.622656] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:22.487 [2024-07-26 12:25:15.634287] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ef6a8 00:24:22.487 [2024-07-26 12:25:15.635883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:16048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.487 [2024-07-26 12:25:15.635911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:22.487 [2024-07-26 12:25:15.646248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f57b0 00:24:22.487 [2024-07-26 12:25:15.647817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:22269 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.487 [2024-07-26 12:25:15.647845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:22.487 [2024-07-26 12:25:15.658085] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e23b8 00:24:22.487 [2024-07-26 12:25:15.659171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:17405 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.487 [2024-07-26 12:25:15.659219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:22.487 [2024-07-26 12:25:15.670733] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e6300 00:24:22.487 [2024-07-26 12:25:15.671795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:2346 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:22.487 [2024-07-26 12:25:15.671841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:22.487 [2024-07-26 12:25:15.685022] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f46d0 00:24:22.487 [2024-07-26 12:25:15.686769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:23175 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.487 [2024-07-26 12:25:15.686812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:22.487 [2024-07-26 12:25:15.698300] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e38d0 00:24:22.487 [2024-07-26 12:25:15.700265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:23696 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.487 [2024-07-26 12:25:15.700292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:22.487 [2024-07-26 12:25:15.710086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ea248 00:24:22.487 [2024-07-26 12:25:15.711499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.487 [2024-07-26 12:25:15.711543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:22.487 [2024-07-26 12:25:15.721561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f5be8 00:24:22.487 [2024-07-26 12:25:15.723693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 
nsid:1 lba:1941 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.487 [2024-07-26 12:25:15.723724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:22.487 [2024-07-26 12:25:15.733476] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e01f8 00:24:22.487 [2024-07-26 12:25:15.734627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:17787 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.487 [2024-07-26 12:25:15.734672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:22.746 [2024-07-26 12:25:15.747456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fcdd0 00:24:22.746 [2024-07-26 12:25:15.748527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:12391 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.746 [2024-07-26 12:25:15.748572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:22.746 [2024-07-26 12:25:15.759512] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ef270 00:24:22.746 [2024-07-26 12:25:15.760568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:13337 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.746 [2024-07-26 12:25:15.760612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:22.746 [2024-07-26 12:25:15.772848] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ff3c8 00:24:22.746 [2024-07-26 12:25:15.774067] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:3940 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.746 [2024-07-26 12:25:15.774125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:22.746 [2024-07-26 12:25:15.786072] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fe720 00:24:22.746 [2024-07-26 12:25:15.787476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:14651 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.746 [2024-07-26 12:25:15.787504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:22.746 [2024-07-26 12:25:15.797971] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f96f8 00:24:22.746 [2024-07-26 12:25:15.798911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:14479 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.746 [2024-07-26 12:25:15.798955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:22.746 [2024-07-26 12:25:15.810833] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e0630 00:24:22.746 [2024-07-26 12:25:15.811569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:15438 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.746 [2024-07-26 12:25:15.811605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:22.746 [2024-07-26 12:25:15.825365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190efae0 00:24:22.746 
[2024-07-26 12:25:15.827128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2044 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.746 [2024-07-26 12:25:15.827168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:22.746 [2024-07-26 12:25:15.838584] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e3498 00:24:22.746 [2024-07-26 12:25:15.840502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:13353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.746 [2024-07-26 12:25:15.840533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:22.746 [2024-07-26 12:25:15.851919] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f3e60 00:24:22.746 [2024-07-26 12:25:15.854003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:9539 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.746 [2024-07-26 12:25:15.854034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:22.746 [2024-07-26 12:25:15.860979] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190dece0 00:24:22.746 [2024-07-26 12:25:15.861887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.746 [2024-07-26 12:25:15.861915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:22.746 [2024-07-26 12:25:15.872916] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2525f30) with pdu=0x2000190fd208 00:24:22.746 [2024-07-26 12:25:15.873806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:16879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.746 [2024-07-26 12:25:15.873838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:22.746 [2024-07-26 12:25:15.887070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fc998 00:24:22.747 [2024-07-26 12:25:15.888178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:4948 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.747 [2024-07-26 12:25:15.888210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:22.747 [2024-07-26 12:25:15.900251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fda78 00:24:22.747 [2024-07-26 12:25:15.901530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.747 [2024-07-26 12:25:15.901559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:22.747 [2024-07-26 12:25:15.912157] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fa3a0 00:24:22.747 [2024-07-26 12:25:15.913398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:3802 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.747 [2024-07-26 12:25:15.913425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:22.747 [2024-07-26 12:25:15.925543] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f9f68 00:24:22.747 [2024-07-26 12:25:15.926980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:24833 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.747 [2024-07-26 12:25:15.927008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:22.747 [2024-07-26 12:25:15.937492] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fac10 00:24:22.747 [2024-07-26 12:25:15.938394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:121 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.747 [2024-07-26 12:25:15.938438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:22.747 [2024-07-26 12:25:15.950449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e9168 00:24:22.747 [2024-07-26 12:25:15.951186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.747 [2024-07-26 12:25:15.951215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:22.747 [2024-07-26 12:25:15.963670] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f81e0 00:24:22.747 [2024-07-26 12:25:15.964601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:22576 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:22.747 [2024-07-26 12:25:15.964633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 
00:24:22.747 [2024-07-26 12:25:15.977013] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ddc00
00:24:22.747 [2024-07-26 12:25:15.978135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:3622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.747 [2024-07-26 12:25:15.978164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:22.747 [2024-07-26 12:25:15.991558] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ec840
00:24:22.747 [2024-07-26 12:25:15.994021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:2057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:22.747 [2024-07-26 12:25:15.994056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0079 p:0 m:0 dnr:0
00:24:23.007 [2024-07-26 12:25:16.000968] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f92c0
00:24:23.007 [2024-07-26 12:25:16.001905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:19400 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.007 [2024-07-26 12:25:16.001935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:24:23.007 [2024-07-26 12:25:16.013208] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f7538
00:24:23.007 [2024-07-26 12:25:16.014111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.007 [2024-07-26 12:25:16.014141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0009 p:0 m:0 dnr:0
00:24:23.007 [2024-07-26 12:25:16.026421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190de470
00:24:23.007 [2024-07-26 12:25:16.027493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:3457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.007 [2024-07-26 12:25:16.027521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0
00:24:23.007 [2024-07-26 12:25:16.039747] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e4578
00:24:23.007 [2024-07-26 12:25:16.040981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:22308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.007 [2024-07-26 12:25:16.041008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0029 p:0 m:0 dnr:0
00:24:23.007 [2024-07-26 12:25:16.053879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ebfd0
00:24:23.007 [2024-07-26 12:25:16.055323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:12991 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.007 [2024-07-26 12:25:16.055350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:23.007 [2024-07-26 12:25:16.067086] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f20d8
00:24:23.007 [2024-07-26 12:25:16.068659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:20513 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.007 [2024-07-26 12:25:16.068687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:24:23.007 [2024-07-26 12:25:16.077484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190feb58
00:24:23.007 [2024-07-26 12:25:16.078382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:101 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.007 [2024-07-26 12:25:16.078409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:23.007 [2024-07-26 12:25:16.090143] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e5a90
00:24:23.008 [2024-07-26 12:25:16.091036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:22937 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.091086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:23.008 [2024-07-26 12:25:16.102819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190dece0
00:24:23.008 [2024-07-26 12:25:16.103716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:3643 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.103759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0
00:24:23.008 [2024-07-26 12:25:16.114563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f1ca0
00:24:23.008 [2024-07-26 12:25:16.115439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:476 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.115484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0008 p:0 m:0 dnr:0
00:24:23.008 [2024-07-26 12:25:16.128768] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190de038
00:24:23.008 [2024-07-26 12:25:16.129876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:3927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.129905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:23.008 [2024-07-26 12:25:16.141421] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fef90
00:24:23.008 [2024-07-26 12:25:16.142512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:24552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.142541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:23.008 [2024-07-26 12:25:16.154084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f9b30
00:24:23.008 [2024-07-26 12:25:16.155200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:3480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.155229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:23.008 [2024-07-26 12:25:16.166753] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fac10
00:24:23.008 [2024-07-26 12:25:16.167828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:21074 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.167873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:23.008 [2024-07-26 12:25:16.178513] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e4de8
00:24:23.008 [2024-07-26 12:25:16.179570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:8202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.179599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0018 p:0 m:0 dnr:0
00:24:23.008 [2024-07-26 12:25:16.191819] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f0bc0
00:24:23.008 [2024-07-26 12:25:16.193053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:24895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.193102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0028 p:0 m:0 dnr:0
00:24:23.008 [2024-07-26 12:25:16.205922] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ee5c8
00:24:23.008 [2024-07-26 12:25:16.207357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:16480 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.207402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:23.008 [2024-07-26 12:25:16.218662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190dfdc0
00:24:23.008 [2024-07-26 12:25:16.220093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:22348 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.220140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:23.008 [2024-07-26 12:25:16.231408] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f4b08
00:24:23.008 [2024-07-26 12:25:16.232865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9056 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.232894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:23.008 [2024-07-26 12:25:16.244176] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e5ec8
00:24:23.008 [2024-07-26 12:25:16.245768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:11648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.245800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:23.008 [2024-07-26 12:25:16.256097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ecc78
00:24:23.008 [2024-07-26 12:25:16.257361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:5202 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.008 [2024-07-26 12:25:16.257392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.267274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f4f40
00:24:23.268 [2024-07-26 12:25:16.268158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:21055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.268185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0007 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.280537] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fef90
00:24:23.268 [2024-07-26 12:25:16.281584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:9071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.281611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0017 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.293881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e88f8
00:24:23.268 [2024-07-26 12:25:16.295086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:1848 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.295116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.307133] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e84c0
00:24:23.268 [2024-07-26 12:25:16.308502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:16214 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.308530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0037 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.318988] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e4140
00:24:23.268 [2024-07-26 12:25:16.319879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.319912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.331395] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fe720
00:24:23.268 [2024-07-26 12:25:16.332309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:6521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.332338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.344171] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f4b08
00:24:23.268 [2024-07-26 12:25:16.345043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:2472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.345094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.356862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190dfdc0
00:24:23.268 [2024-07-26 12:25:16.357731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:1417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.357774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0047 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.369875] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f2d80
00:24:23.268 [2024-07-26 12:25:16.370598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:14946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.370626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.384527] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ea680
00:24:23.268 [2024-07-26 12:25:16.386264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:18417 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.386292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.397807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f6020
00:24:23.268 [2024-07-26 12:25:16.399685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:3774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.399713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0067 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.411093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e6b70
00:24:23.268 [2024-07-26 12:25:16.413189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:7992 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.413224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0077 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.420139] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f5378
00:24:23.268 [2024-07-26 12:25:16.420976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:22257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.421008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.432941] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f8618
00:24:23.268 [2024-07-26 12:25:16.433860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13435 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.433904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.445920] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fc998
00:24:23.268 [2024-07-26 12:25:16.446651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:283 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.446684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.459333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fef90
00:24:23.268 [2024-07-26 12:25:16.460199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:15067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.460229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.472673] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f57b0
00:24:23.268 [2024-07-26 12:25:16.473743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:11320 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.268 [2024-07-26 12:25:16.473779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:23.268 [2024-07-26 12:25:16.487309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ef6a8
00:24:23.269 [2024-07-26 12:25:16.489436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:25067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.269 [2024-07-26 12:25:16.489474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:23.269 [2024-07-26 12:25:16.496248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e5a90
00:24:23.269 [2024-07-26 12:25:16.497154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:8719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.269 [2024-07-26 12:25:16.497183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:23.269 [2024-07-26 12:25:16.509899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e23b8
00:24:23.269 [2024-07-26 12:25:16.510972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:6945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.269 [2024-07-26 12:25:16.511006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:23.529 [2024-07-26 12:25:16.522017] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e4140
00:24:23.529 [2024-07-26 12:25:16.523079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:19488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.529 [2024-07-26 12:25:16.523130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:23.529 [2024-07-26 12:25:16.536192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f7538
00:24:23.529 [2024-07-26 12:25:16.537428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:4676 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.529 [2024-07-26 12:25:16.537472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:23.529 [2024-07-26 12:25:16.548891] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f8a50
00:24:23.529 [2024-07-26 12:25:16.550140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:17304 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.529 [2024-07-26 12:25:16.550186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:23.529 [2024-07-26 12:25:16.561648] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f2d80
00:24:23.529 [2024-07-26 12:25:16.562893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:24803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.529 [2024-07-26 12:25:16.562925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:23.529 [2024-07-26 12:25:16.574271] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190edd58
00:24:23.529 [2024-07-26 12:25:16.575503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:19519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.529 [2024-07-26 12:25:16.575547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:23.529 [2024-07-26 12:25:16.586905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f6890
00:24:23.529 [2024-07-26 12:25:16.588172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:1211 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.529 [2024-07-26 12:25:16.588218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:23.529 [2024-07-26 12:25:16.599665] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ed4e8
00:24:23.529 [2024-07-26 12:25:16.600914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:16998 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.600943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.612327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190de038
00:24:23.530 [2024-07-26 12:25:16.613559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:22829 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.613587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.625464] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fc128
00:24:23.530 [2024-07-26 12:25:16.626859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:2012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.626887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.637433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f1ca0
00:24:23.530 [2024-07-26 12:25:16.638821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:22449 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.638848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.649290] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f4298
00:24:23.530 [2024-07-26 12:25:16.650168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:4057 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.650202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.661787] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190eaab8
00:24:23.530 [2024-07-26 12:25:16.662666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:24379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.662696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.676050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fd640
00:24:23.530 [2024-07-26 12:25:16.677578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:20976 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.677621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.689324] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f2948
00:24:23.530 [2024-07-26 12:25:16.691071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:23226 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.691099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.701197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e9168
00:24:23.530 [2024-07-26 12:25:16.702417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:3071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.702444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.714109] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e5a90
00:24:23.530 [2024-07-26 12:25:16.715197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:8826 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.715227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.725899] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190df988
00:24:23.530 [2024-07-26 12:25:16.727151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:15138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.727179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0075 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.737905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f57b0
00:24:23.530 [2024-07-26 12:25:16.738796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:7323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.738823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.750924] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e6300
00:24:23.530 [2024-07-26 12:25:16.751970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:7065 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.752018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.764570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ddc00
00:24:23.530 [2024-07-26 12:25:16.765818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:14467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.765847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0055 p:0 m:0 dnr:0
00:24:23.530 [2024-07-26 12:25:16.776692] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e0ea0
00:24:23.530 [2024-07-26 12:25:16.777923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:16328 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.530 [2024-07-26 12:25:16.777953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:24:23.791 [2024-07-26 12:25:16.790091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e1f80
00:24:23.791 [2024-07-26 12:25:16.791434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:23879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.791 [2024-07-26 12:25:16.791462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:24:23.791 [2024-07-26 12:25:16.803340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190eaef0
00:24:23.791 [2024-07-26 12:25:16.804902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:21015 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.791 [2024-07-26 12:25:16.804930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:24:23.791 [2024-07-26 12:25:16.815177] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ed920
00:24:23.791 [2024-07-26 12:25:16.816253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2850 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.791 [2024-07-26 12:25:16.816282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:24:23.791 [2024-07-26 12:25:16.828149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f2510
00:24:23.791 [2024-07-26 12:25:16.829023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:13803 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.791 [2024-07-26 12:25:16.829054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:23.791 [2024-07-26 12:25:16.841052] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f31b8
00:24:23.791 [2024-07-26 12:25:16.842301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:16352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.791 [2024-07-26 12:25:16.842331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:23.791 [2024-07-26 12:25:16.853754] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e1710
00:24:23.791 [2024-07-26 12:25:16.854985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:24471 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.791 [2024-07-26 12:25:16.855029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:23.791 [2024-07-26 12:25:16.866506] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e7818
00:24:23.791 [2024-07-26 12:25:16.867723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:9811 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.791 [2024-07-26 12:25:16.867767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:23.791 [2024-07-26 12:25:16.879092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190eee38
00:24:23.791 [2024-07-26 12:25:16.880385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.791 [2024-07-26 12:25:16.880414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:23.791 [2024-07-26 12:25:16.891816] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f8618
00:24:23.791 [2024-07-26 12:25:16.893073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.791 [2024-07-26 12:25:16.893116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:23.791 [2024-07-26 12:25:16.904583] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f57b0
00:24:23.791 [2024-07-26 12:25:16.905783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:22248 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.791 [2024-07-26 12:25:16.905828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:23.791 [2024-07-26 12:25:16.918850] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190dfdc0
00:24:23.791 [2024-07-26 12:25:16.920780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:19010 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.792 [2024-07-26 12:25:16.920809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:24:23.792 [2024-07-26 12:25:16.932192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f0350
00:24:23.792 [2024-07-26 12:25:16.934277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:1847 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.792 [2024-07-26 12:25:16.934305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:109 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:24:23.792 [2024-07-26 12:25:16.941228] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fdeb0
00:24:23.792 [2024-07-26 12:25:16.942119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:21340 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.792 [2024-07-26 12:25:16.942148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0035 p:0 m:0 dnr:0
00:24:23.792 [2024-07-26 12:25:16.953331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f35f0
00:24:23.792 [2024-07-26 12:25:16.954271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:25484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.792 [2024-07-26 12:25:16.954299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:24:23.792 [2024-07-26 12:25:16.966661] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ea680
00:24:23.792 [2024-07-26 12:25:16.967697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23881 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.792 [2024-07-26 12:25:16.967725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:24:23.792 [2024-07-26 12:25:16.980815] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f0788
00:24:23.792 [2024-07-26 12:25:16.982086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:1246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.792 [2024-07-26 12:25:16.982139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:23.792 [2024-07-26 12:25:16.993547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f92c0
00:24:23.792 [2024-07-26 12:25:16.994783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:21622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.792 [2024-07-26 12:25:16.994826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0065 p:0 m:0 dnr:0
00:24:23.792 [2024-07-26 12:25:17.006231] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e4140
00:24:23.792 [2024-07-26 12:25:17.007518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:23472 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:23.792 [2024-07-26 12:25:17.007548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT
ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:23.792 [2024-07-26 12:25:17.018253] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f7da8 00:24:23.792 [2024-07-26 12:25:17.019515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:1525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.792 [2024-07-26 12:25:17.019543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:23.792 [2024-07-26 12:25:17.032499] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190df118 00:24:23.792 [2024-07-26 12:25:17.033918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:13568 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:23.792 [2024-07-26 12:25:17.033947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.053 [2024-07-26 12:25:17.045629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f6cc8 00:24:24.053 [2024-07-26 12:25:17.047269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:18412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-26 12:25:17.047299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.053 [2024-07-26 12:25:17.057736] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ef6a8 00:24:24.053 [2024-07-26 12:25:17.059288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:22069 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-26 12:25:17.059315] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:24.053 [2024-07-26 12:25:17.069563] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f5be8 00:24:24.053 [2024-07-26 12:25:17.070593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:15617 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-26 12:25:17.070621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:24.053 [2024-07-26 12:25:17.082447] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f81e0 00:24:24.053 [2024-07-26 12:25:17.083333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:13332 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-26 12:25:17.083363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:24.053 [2024-07-26 12:25:17.095703] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e4de8 00:24:24.053 [2024-07-26 12:25:17.096741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:3793 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-26 12:25:17.096774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:24.053 [2024-07-26 12:25:17.107748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f9b30 00:24:24.053 [2024-07-26 12:25:17.109677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:16554 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:24.053 [2024-07-26 12:25:17.109710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.053 [2024-07-26 12:25:17.118655] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e2c28 00:24:24.053 [2024-07-26 12:25:17.119516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:18264 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.053 [2024-07-26 12:25:17.119543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.132746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f2510 00:24:24.054 [2024-07-26 12:25:17.133820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5186 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.133851] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.145877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190eaef0 00:24:24.054 [2024-07-26 12:25:17.147091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5481 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.147118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.158690] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e4de8 00:24:24.054 [2024-07-26 12:25:17.159821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 
lba:19790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.159850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.170777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fc560 00:24:24.054 [2024-07-26 12:25:17.171922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:10263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.171959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.182807] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190efae0 00:24:24.054 [2024-07-26 12:25:17.183925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:5984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.183953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.194514] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f7538 00:24:24.054 [2024-07-26 12:25:17.195628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:4392 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.195657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.206209] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ebb98 00:24:24.054 [2024-07-26 12:25:17.207302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:17682 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.207331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.217862] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e99d8 00:24:24.054 [2024-07-26 12:25:17.218982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:22152 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.219011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.229551] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190eff18 00:24:24.054 [2024-07-26 12:25:17.230665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:17656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.230693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.240495] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ed0b0 00:24:24.054 [2024-07-26 12:25:17.241512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:23217 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.241540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.253312] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f6458 
00:24:24.054 [2024-07-26 12:25:17.254632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:16788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.254661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.265226] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e5a90 00:24:24.054 [2024-07-26 12:25:17.266836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:15398 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.266877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.277660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fb048 00:24:24.054 [2024-07-26 12:25:17.278922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:7233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.278953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.289412] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fb8b8 00:24:24.054 [2024-07-26 12:25:17.290706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.290737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.054 [2024-07-26 12:25:17.301339] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x2525f30) with pdu=0x2000190e84c0 00:24:24.054 [2024-07-26 12:25:17.302724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:5925 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.054 [2024-07-26 12:25:17.302773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.313 [2024-07-26 12:25:17.311048] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e38d0 00:24:24.313 [2024-07-26 12:25:17.311855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:13982 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.313 [2024-07-26 12:25:17.311884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:24.313 [2024-07-26 12:25:17.324212] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f8e88 00:24:24.313 [2024-07-26 12:25:17.325548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:22048 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.313 [2024-07-26 12:25:17.325575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:24.313 [2024-07-26 12:25:17.336269] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f8a50 00:24:24.313 [2024-07-26 12:25:17.337723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.313 [2024-07-26 12:25:17.337751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:24.313 [2024-07-26 12:25:17.346979] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ed4e8 00:24:24.313 [2024-07-26 12:25:17.348094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:7874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.313 [2024-07-26 12:25:17.348123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:24.313 [2024-07-26 12:25:17.358508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e73e0 00:24:24.313 [2024-07-26 12:25:17.359544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:15818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.313 [2024-07-26 12:25:17.359574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:24.313 [2024-07-26 12:25:17.369254] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ef6a8 00:24:24.313 [2024-07-26 12:25:17.371213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:24368 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.313 [2024-07-26 12:25:17.371243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:24.313 [2024-07-26 12:25:17.380279] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f1430 00:24:24.313 [2024-07-26 12:25:17.381064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:2620 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.313 [2024-07-26 12:25:17.381093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:24:24.314 [2024-07-26 12:25:17.392126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190de038 00:24:24.314 [2024-07-26 12:25:17.393056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:6173 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.314 [2024-07-26 12:25:17.393118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:24.314 [2024-07-26 12:25:17.403112] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e9e10 00:24:24.314 [2024-07-26 12:25:17.403979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:7608 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.314 [2024-07-26 12:25:17.404007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:24.314 [2024-07-26 12:25:17.415969] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f0350 00:24:24.314 [2024-07-26 12:25:17.417054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:5558 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.314 [2024-07-26 12:25:17.417091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.314 [2024-07-26 12:25:17.427841] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190fda78 00:24:24.314 [2024-07-26 12:25:17.429043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:21265 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.314 [2024-07-26 12:25:17.429079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:78 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:24.314 [2024-07-26 12:25:17.438806] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f0788 00:24:24.314 [2024-07-26 12:25:17.439955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:17300 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.314 [2024-07-26 12:25:17.439983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:24.314 [2024-07-26 12:25:17.449467] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190e3d08 00:24:24.314 [2024-07-26 12:25:17.450277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:1185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.314 [2024-07-26 12:25:17.450306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:24.314 [2024-07-26 12:25:17.461152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190ea248 00:24:24.314 [2024-07-26 12:25:17.461745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:15275 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.314 [2024-07-26 12:25:17.461774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:24.314 [2024-07-26 12:25:17.473091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2525f30) with pdu=0x2000190f9f68 00:24:24.314 [2024-07-26 12:25:17.473947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:2738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:24.314 [2024-07-26 12:25:17.473974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:24.314 00:24:24.314 Latency(us) 00:24:24.314 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.314 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:24:24.314 nvme0n1 : 2.01 20347.87 79.48 0.00 0.00 6279.93 2500.08 15728.64 00:24:24.314 =================================================================================================================== 00:24:24.314 Total : 20347.87 79.48 0.00 0.00 6279.93 2500.08 15728.64 00:24:24.314 0 00:24:24.314 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:24.314 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:24.314 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:24.314 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:24.314 | .driver_specific 00:24:24.314 | .nvme_error 00:24:24.314 | .status_code 00:24:24.314 | .command_transient_transport_error' 00:24:24.574 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 160 > 0 )) 00:24:24.574 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2971861 00:24:24.574 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2971861 ']' 00:24:24.574 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2971861 00:24:24.574 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:24:24.574 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:24.574 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2971861 00:24:24.574 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:24.574 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:24.574 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2971861' 00:24:24.574 killing process with pid 2971861 00:24:24.574 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2971861 00:24:24.574 Received shutdown signal, test time was about 2.000000 seconds 00:24:24.574 00:24:24.574 Latency(us) 00:24:24.574 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:24.574 =================================================================================================================== 00:24:24.574 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:24.574 12:25:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2971861 00:24:24.832 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:24:24.832 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:24:24.832 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:24:24.832 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:24:24.832 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:24:24.832 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=2972275 00:24:24.832 12:25:18 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:24:24.832 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 2972275 /var/tmp/bperf.sock 00:24:24.832 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@831 -- # '[' -z 2972275 ']' 00:24:24.832 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:24:24.832 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:24.832 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:24:24.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:24:24.832 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:24.832 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.091 [2024-07-26 12:25:18.099844] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:24:25.091 [2024-07-26 12:25:18.099930] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2972275 ] 00:24:25.091 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:25.091 Zero copy mechanism will not be used. 
00:24:25.091 EAL: No free 2048 kB hugepages reported on node 1 00:24:25.091 [2024-07-26 12:25:18.161487] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:25.091 [2024-07-26 12:25:18.276428] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:25.349 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:25.349 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # return 0 00:24:25.349 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:25.349 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:24:25.606 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:24:25.606 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.607 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.607 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.607 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:25.607 12:25:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:24:25.864 nvme0n1 00:24:25.864 12:25:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:24:25.864 12:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:25.864 12:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:25.864 12:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:25.864 12:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:24:25.864 12:25:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:24:26.122 I/O size of 131072 is greater than zero copy threshold (65536). 00:24:26.122 Zero copy mechanism will not be used. 00:24:26.122 Running I/O for 2 seconds... 
00:24:26.122 [2024-07-26 12:25:19.157694] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.158056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.158115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.168779] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.169143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.169188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.182219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.182602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.182636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.195365] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.195748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.195781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.208790] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.209209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.209237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.222383] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.222737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.222784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.235210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.235593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.235646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.248999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.249372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.249416] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.262494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.262841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.262870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.274530] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.274870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.274914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.287501] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.287844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.287873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.298315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.298661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:26.122 [2024-07-26 12:25:19.298689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.311436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.311774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.311803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.324267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.324641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.324691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.337373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.337720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.337748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.349098] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.349471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.349524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.361877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.122 [2024-07-26 12:25:19.362244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.122 [2024-07-26 12:25:19.362287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.122 [2024-07-26 12:25:19.374039] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.123 [2024-07-26 12:25:19.374424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.123 [2024-07-26 12:25:19.374457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.381 [2024-07-26 12:25:19.387502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.381 [2024-07-26 12:25:19.387830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.381 [2024-07-26 12:25:19.387858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.381 [2024-07-26 12:25:19.399632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.381 [2024-07-26 12:25:19.399973] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.381 [2024-07-26 12:25:19.400000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.381 [2024-07-26 12:25:19.412784] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.381 [2024-07-26 12:25:19.413227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.381 [2024-07-26 12:25:19.413267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.381 [2024-07-26 12:25:19.425589] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.381 [2024-07-26 12:25:19.425948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.381 [2024-07-26 12:25:19.425977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.381 [2024-07-26 12:25:19.438306] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.381 [2024-07-26 12:25:19.438662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.381 [2024-07-26 12:25:19.438691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.381 [2024-07-26 12:25:19.450956] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 
00:24:26.381 [2024-07-26 12:25:19.451322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.381 [2024-07-26 12:25:19.451355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.381 [2024-07-26 12:25:19.462741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.381 [2024-07-26 12:25:19.463110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.381 [2024-07-26 12:25:19.463163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.381 [2024-07-26 12:25:19.475683] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.381 [2024-07-26 12:25:19.476033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.381 [2024-07-26 12:25:19.476067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.381 [2024-07-26 12:25:19.487712] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.381 [2024-07-26 12:25:19.488040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.381 [2024-07-26 12:25:19.488093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.381 [2024-07-26 12:25:19.501019] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.381 [2024-07-26 12:25:19.501386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.381 [2024-07-26 12:25:19.501433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.381 [2024-07-26 12:25:19.513194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.381 [2024-07-26 12:25:19.513552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.382 [2024-07-26 12:25:19.513594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.382 [2024-07-26 12:25:19.525278] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.382 [2024-07-26 12:25:19.525641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.382 [2024-07-26 12:25:19.525695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.382 [2024-07-26 12:25:19.537767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.382 [2024-07-26 12:25:19.538166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.382 [2024-07-26 12:25:19.538220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.382 [2024-07-26 
12:25:19.550423] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.382 [2024-07-26 12:25:19.550769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.382 [2024-07-26 12:25:19.550797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.382 [2024-07-26 12:25:19.563540] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.382 [2024-07-26 12:25:19.563885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.382 [2024-07-26 12:25:19.563929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.382 [2024-07-26 12:25:19.576508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.382 [2024-07-26 12:25:19.576872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.382 [2024-07-26 12:25:19.576920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.382 [2024-07-26 12:25:19.589515] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.382 [2024-07-26 12:25:19.589851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.382 [2024-07-26 12:25:19.589879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.382 [2024-07-26 12:25:19.601843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.382 [2024-07-26 12:25:19.602217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.382 [2024-07-26 12:25:19.602259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.382 [2024-07-26 12:25:19.614818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.382 [2024-07-26 12:25:19.615187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.382 [2024-07-26 12:25:19.615230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.382 [2024-07-26 12:25:19.627509] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.382 [2024-07-26 12:25:19.627879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.382 [2024-07-26 12:25:19.627921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.646 [2024-07-26 12:25:19.640051] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.646 [2024-07-26 12:25:19.640414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.646 [2024-07-26 12:25:19.640461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.646 [2024-07-26 12:25:19.652164] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.646 [2024-07-26 12:25:19.652549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.646 [2024-07-26 12:25:19.652592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.646 [2024-07-26 12:25:19.664839] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.646 [2024-07-26 12:25:19.665249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.646 [2024-07-26 12:25:19.665279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.646 [2024-07-26 12:25:19.677196] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.646 [2024-07-26 12:25:19.677544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.646 [2024-07-26 12:25:19.677572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.646 [2024-07-26 12:25:19.690428] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.646 [2024-07-26 12:25:19.690785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.646 [2024-07-26 12:25:19.690830] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.646 [2024-07-26 12:25:19.702720] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.646 [2024-07-26 12:25:19.703070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.703115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.715640] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.716000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.716043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.728374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.728721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:1120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.728750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.741202] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.741450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.741478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.753749] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.753994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.754023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.766285] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.766633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.766661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.779192] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.779531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.779575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.791490] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.791855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.791912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.804217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.804576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.804621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.816526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.816892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.816950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.829333] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.829690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.829722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.841469] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.841851] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.841888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.854698] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.855113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.855144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.867632] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.867972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.868001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.880131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.647 [2024-07-26 12:25:19.880377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.880406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.647 [2024-07-26 12:25:19.892573] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 
00:24:26.647 [2024-07-26 12:25:19.892914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.647 [2024-07-26 12:25:19.892943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.907 [2024-07-26 12:25:19.905781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.907 [2024-07-26 12:25:19.906144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.907 [2024-07-26 12:25:19.906193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.907 [2024-07-26 12:25:19.917650] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.907 [2024-07-26 12:25:19.918017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.907 [2024-07-26 12:25:19.918047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.907 [2024-07-26 12:25:19.931448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.907 [2024-07-26 12:25:19.931809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.907 [2024-07-26 12:25:19.931838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.907 [2024-07-26 12:25:19.944126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.907 [2024-07-26 12:25:19.944473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.907 [2024-07-26 12:25:19.944504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.907 [2024-07-26 12:25:19.956685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.907 [2024-07-26 12:25:19.957034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.907 [2024-07-26 12:25:19.957084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.907 [2024-07-26 12:25:19.969373] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.907 [2024-07-26 12:25:19.969708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:19.969751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:19.982195] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:19.982528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:19.982556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 
12:25:19.994267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:19.994612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:19.994640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:20.006739] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:20.006910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:20.006946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:20.019194] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:20.019550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:20.019581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:20.030706] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:20.031066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:20.031096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:20.042717] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:20.043094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:20.043144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:20.054701] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:20.054988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:20.055017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:20.067491] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:20.067811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:20.067840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:20.079224] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:20.079554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:20.079581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:20.092094] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:20.092446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:20.092474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:20.104651] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:20.105020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:20.105050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:20.117070] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:20.117432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:20.117474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:20.130337] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:20.130698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:20.130741] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:20.143134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:20.143470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:20.143498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:26.908 [2024-07-26 12:25:20.155761] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:26.908 [2024-07-26 12:25:20.156157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:26.908 [2024-07-26 12:25:20.156192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.168122] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.168491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.168534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.180877] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.181255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.181299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.193660] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.194003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.194030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.207458] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.207802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.207831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.219828] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.220177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.220206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.232507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.232860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:15 nsid:1 lba:640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.232888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.245031] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.245279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.245327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.257040] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.257373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.257399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.269914] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.270302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.270346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.282959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.283341] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.283372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.295734] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.296087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.296115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.308381] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.308723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.308750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.321132] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.321471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.321498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.333145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 
00:24:27.169 [2024-07-26 12:25:20.333493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.333520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.345267] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.345614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.345642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.358093] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.358447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.358492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.370871] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.371285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.371334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.383246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data 
digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.383599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.383627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.395375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.395729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.395757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.408606] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.169 [2024-07-26 12:25:20.408958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.169 [2024-07-26 12:25:20.408986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.169 [2024-07-26 12:25:20.421691] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.422082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.422113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 
12:25:20.434197] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.434551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.434594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.445950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.446325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.446354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.459318] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.459654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.459682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.472274] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.472523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.472551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 
cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.485433] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.485778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.485805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.498152] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.498510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.498552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.511084] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.511442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.511470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.523364] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.523701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.523728] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.535930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.536232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.536259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.548843] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.549196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.549225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.561999] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.562366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.562414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.575714] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.576045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 
12:25:20.576096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.588737] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.589121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.589148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.601327] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.601575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.601618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.614518] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.614853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.614881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.627323] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.627657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15296 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.627685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.639002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.639372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.639400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.652311] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.652631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.652658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.665187] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.665537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.665565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.429 [2024-07-26 12:25:20.678080] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.429 [2024-07-26 12:25:20.678435] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.429 [2024-07-26 12:25:20.678481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.688 [2024-07-26 12:25:20.690623] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.688 [2024-07-26 12:25:20.690974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:0 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-07-26 12:25:20.691003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.688 [2024-07-26 12:25:20.703789] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.688 [2024-07-26 12:25:20.704163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-07-26 12:25:20.704214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.688 [2024-07-26 12:25:20.716399] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.688 [2024-07-26 12:25:20.716746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-07-26 12:25:20.716774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.688 [2024-07-26 12:25:20.729149] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.688 [2024-07-26 
12:25:20.729491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-07-26 12:25:20.729518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.688 [2024-07-26 12:25:20.741996] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.688 [2024-07-26 12:25:20.742371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-07-26 12:25:20.742400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.688 [2024-07-26 12:25:20.755244] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.688 [2024-07-26 12:25:20.755582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-07-26 12:25:20.755610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.688 [2024-07-26 12:25:20.768183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.688 [2024-07-26 12:25:20.768536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.688 [2024-07-26 12:25:20.768564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.689 [2024-07-26 12:25:20.781611] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.689 [2024-07-26 12:25:20.781958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.689 [2024-07-26 12:25:20.781986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.689 [2024-07-26 12:25:20.794115] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.689 [2024-07-26 12:25:20.794465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.689 [2024-07-26 12:25:20.794510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.689 [2024-07-26 12:25:20.807484] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.689 [2024-07-26 12:25:20.807821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.689 [2024-07-26 12:25:20.807849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.689 [2024-07-26 12:25:20.820210] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.689 [2024-07-26 12:25:20.820545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.689 [2024-07-26 12:25:20.820574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.689 [2024-07-26 12:25:20.833107] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.689 [2024-07-26 12:25:20.833442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.689 [2024-07-26 12:25:20.833470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.689 [2024-07-26 12:25:20.845375] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.689 [2024-07-26 12:25:20.845715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.689 [2024-07-26 12:25:20.845743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.689 [2024-07-26 12:25:20.857721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.689 [2024-07-26 12:25:20.858099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.689 [2024-07-26 12:25:20.858128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.689 [2024-07-26 12:25:20.870376] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.689 [2024-07-26 12:25:20.870708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.689 [2024-07-26 12:25:20.870736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:24:27.689 [2024-07-26 12:25:20.882730] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.689 [2024-07-26 12:25:20.883071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.689 [2024-07-26 12:25:20.883100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.689 [2024-07-26 12:25:20.894645] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.689 [2024-07-26 12:25:20.895045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.689 [2024-07-26 12:25:20.895108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.689 [2024-07-26 12:25:20.907101] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.689 [2024-07-26 12:25:20.907460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.689 [2024-07-26 12:25:20.907503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.689 [2024-07-26 12:25:20.919721] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.689 [2024-07-26 12:25:20.920094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.689 [2024-07-26 12:25:20.920141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.689 [2024-07-26 12:25:20.931950] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.689 [2024-07-26 12:25:20.932335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.689 [2024-07-26 12:25:20.932366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.947 [2024-07-26 12:25:20.944237] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.947 [2024-07-26 12:25:20.944534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.947 [2024-07-26 12:25:20.944562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.947 [2024-07-26 12:25:20.956777] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.947 [2024-07-26 12:25:20.957127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.947 [2024-07-26 12:25:20.957157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.947 [2024-07-26 12:25:20.968600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.947 [2024-07-26 12:25:20.968853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.947 [2024-07-26 12:25:20.968882] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.947 [2024-07-26 12:25:20.980905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.947 [2024-07-26 12:25:20.981290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.947 [2024-07-26 12:25:20.981319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.947 [2024-07-26 12:25:20.993325] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:20.993661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-07-26 12:25:20.993689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.948 [2024-07-26 12:25:21.005523] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:21.005860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-07-26 12:25:21.005888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.948 [2024-07-26 12:25:21.018868] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:21.019179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:27.948 [2024-07-26 12:25:21.019208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-07-26 12:25:21.030414] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:21.030851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:24768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-07-26 12:25:21.030901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.948 [2024-07-26 12:25:21.041388] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:21.041852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-07-26 12:25:21.041895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.948 [2024-07-26 12:25:21.053350] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:21.053817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-07-26 12:25:21.053860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.948 [2024-07-26 12:25:21.065874] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:21.066407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 
lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-07-26 12:25:21.066435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-07-26 12:25:21.077578] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:21.078032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-07-26 12:25:21.078067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.948 [2024-07-26 12:25:21.090030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:21.090501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-07-26 12:25:21.090543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.948 [2024-07-26 12:25:21.101748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:21.102269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-07-26 12:25:21.102297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.948 [2024-07-26 12:25:21.114054] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:21.114474] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-07-26 12:25:21.114504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:27.948 [2024-07-26 12:25:21.125510] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:21.125934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-07-26 12:25:21.125962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:27.948 [2024-07-26 12:25:21.136895] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:21.137309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-07-26 12:25:21.137353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:27.948 [2024-07-26 12:25:21.147903] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x2526270) with pdu=0x2000190fef90 00:24:27.948 [2024-07-26 12:25:21.148329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:27.948 [2024-07-26 12:25:21.148357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:27.948 00:24:27.948 Latency(us) 00:24:27.948 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:27.948 Job: nvme0n1 
(Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:24:27.948 nvme0n1 : 2.01 2468.16 308.52 0.00 0.00 6467.35 2257.35 14272.28 00:24:27.948 =================================================================================================================== 00:24:27.948 Total : 2468.16 308.52 0.00 0.00 6467.35 2257.35 14272.28 00:24:27.948 0 00:24:27.948 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:24:27.948 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:24:27.948 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:24:27.948 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:24:27.948 | .driver_specific 00:24:27.948 | .nvme_error 00:24:27.948 | .status_code 00:24:27.948 | .command_transient_transport_error' 00:24:28.207 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 159 > 0 )) 00:24:28.207 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 2972275 00:24:28.207 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2972275 ']' 00:24:28.207 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2972275 00:24:28.207 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:24:28.207 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.207 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2972275 00:24:28.208 12:25:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:28.208 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:28.208 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2972275' 00:24:28.208 killing process with pid 2972275 00:24:28.208 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2972275 00:24:28.208 Received shutdown signal, test time was about 2.000000 seconds 00:24:28.208 00:24:28.208 Latency(us) 00:24:28.208 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:28.208 =================================================================================================================== 00:24:28.208 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:24:28.208 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2972275 00:24:28.466 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 2970779 00:24:28.466 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@950 -- # '[' -z 2970779 ']' 00:24:28.466 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # kill -0 2970779 00:24:28.466 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # uname 00:24:28.466 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:28.466 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2970779 00:24:28.724 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:24:28.724 12:25:21 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:24:28.724 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2970779' 00:24:28.724 killing process with pid 2970779 00:24:28.724 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@969 -- # kill 2970779 00:24:28.724 12:25:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@974 -- # wait 2970779 00:24:28.983 00:24:28.983 real 0m16.088s 00:24:28.983 user 0m31.489s 00:24:28.983 sys 0m4.268s 00:24:28.983 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:28.983 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:24:28.983 ************************************ 00:24:28.983 END TEST nvmf_digest_error 00:24:28.983 ************************************ 00:24:28.983 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:24:28.983 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:24:28.983 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # sync 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:28.984 rmmod nvme_tcp 00:24:28.984 rmmod nvme_fabrics 00:24:28.984 rmmod nvme_keyring 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 
00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 2970779 ']' 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 2970779 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@950 -- # '[' -z 2970779 ']' 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # kill -0 2970779 00:24:28.984 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2970779) - No such process 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@977 -- # echo 'Process with pid 2970779 is not found' 00:24:28.984 Process with pid 2970779 is not found 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:28.984 12:25:22 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:30.889 12:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:30.889 00:24:30.889 real 0m36.850s 00:24:30.889 user 1m5.556s 00:24:30.889 sys 0m9.708s 00:24:30.889 
12:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:30.889 12:25:24 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:24:30.889 ************************************ 00:24:30.889 END TEST nvmf_digest 00:24:30.889 ************************************ 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:31.148 ************************************ 00:24:31.148 START TEST nvmf_bdevperf 00:24:31.148 ************************************ 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:24:31.148 * Looking for test storage... 
00:24:31.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:24:31.148 12:25:24 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # 
local -ga net_devs 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:33.052 12:25:26 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:33.052 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:33.052 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:33.052 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.052 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf 
-- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:33.313 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:33.313 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:33.313 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:24:33.313 00:24:33.313 --- 10.0.0.2 ping statistics --- 00:24:33.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.313 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:33.313 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:33.313 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:24:33.313 00:24:33.313 --- 10.0.0.1 ping statistics --- 00:24:33.313 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:33.313 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:33.313 
12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2974621 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2974621 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2974621 ']' 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:33.313 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:33.313 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.313 [2024-07-26 12:25:26.533802] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:24:33.313 [2024-07-26 12:25:26.533900] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:33.572 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.572 [2024-07-26 12:25:26.609168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:24:33.572 [2024-07-26 12:25:26.727819] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:33.572 [2024-07-26 12:25:26.727883] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:33.572 [2024-07-26 12:25:26.727911] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:33.572 [2024-07-26 12:25:26.727924] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:33.572 [2024-07-26 12:25:26.727934] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:33.572 [2024-07-26 12:25:26.728320] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:33.572 [2024-07-26 12:25:26.728904] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:24:33.572 [2024-07-26 12:25:26.728909] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.830 [2024-07-26 12:25:26.860133] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.830 Malloc0 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 
-- # [[ 0 == 0 ]] 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:33.830 [2024-07-26 12:25:26.919728] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:33.830 
12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:33.830 { 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme$subsystem", 00:24:33.830 "trtype": "$TEST_TRANSPORT", 00:24:33.830 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "$NVMF_PORT", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:33.830 "hdgst": ${hdgst:-false}, 00:24:33.830 "ddgst": ${ddgst:-false} 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 } 00:24:33.830 EOF 00:24:33.830 )") 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:33.830 12:25:26 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:33.830 "params": { 00:24:33.830 "name": "Nvme1", 00:24:33.830 "trtype": "tcp", 00:24:33.830 "traddr": "10.0.0.2", 00:24:33.830 "adrfam": "ipv4", 00:24:33.830 "trsvcid": "4420", 00:24:33.830 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:33.830 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:33.830 "hdgst": false, 00:24:33.830 "ddgst": false 00:24:33.830 }, 00:24:33.830 "method": "bdev_nvme_attach_controller" 00:24:33.830 }' 00:24:33.830 [2024-07-26 12:25:26.964874] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:24:33.830 [2024-07-26 12:25:26.964958] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2974767 ] 00:24:33.830 EAL: No free 2048 kB hugepages reported on node 1 00:24:33.830 [2024-07-26 12:25:27.023916] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:34.088 [2024-07-26 12:25:27.133610] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:34.347 Running I/O for 1 seconds... 00:24:35.283 00:24:35.283 Latency(us) 00:24:35.283 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:35.283 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:35.283 Verification LBA range: start 0x0 length 0x4000 00:24:35.283 Nvme1n1 : 1.02 8801.21 34.38 0.00 0.00 14483.13 2827.76 13398.47 00:24:35.283 =================================================================================================================== 00:24:35.283 Total : 8801.21 34.38 0.00 0.00 14483.13 2827.76 13398.47 00:24:35.541 12:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=2974912 00:24:35.541 12:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:24:35.541 12:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:24:35.541 12:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:24:35.541 12:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:24:35.541 12:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:24:35.541 12:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:24:35.541 12:25:28 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:24:35.541 { 00:24:35.541 "params": { 00:24:35.541 "name": "Nvme$subsystem", 00:24:35.541 "trtype": "$TEST_TRANSPORT", 00:24:35.541 "traddr": "$NVMF_FIRST_TARGET_IP", 00:24:35.541 "adrfam": "ipv4", 00:24:35.541 "trsvcid": "$NVMF_PORT", 00:24:35.541 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:24:35.541 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:24:35.541 "hdgst": ${hdgst:-false}, 00:24:35.541 "ddgst": ${ddgst:-false} 00:24:35.541 }, 00:24:35.541 "method": "bdev_nvme_attach_controller" 00:24:35.541 } 00:24:35.541 EOF 00:24:35.541 )") 00:24:35.541 12:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:24:35.541 12:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:24:35.541 12:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:24:35.541 12:25:28 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:24:35.541 "params": { 00:24:35.541 "name": "Nvme1", 00:24:35.541 "trtype": "tcp", 00:24:35.541 "traddr": "10.0.0.2", 00:24:35.541 "adrfam": "ipv4", 00:24:35.541 "trsvcid": "4420", 00:24:35.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:24:35.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:24:35.541 "hdgst": false, 00:24:35.541 "ddgst": false 00:24:35.541 }, 00:24:35.541 "method": "bdev_nvme_attach_controller" 00:24:35.541 }' 00:24:35.541 [2024-07-26 12:25:28.777115] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:24:35.541 [2024-07-26 12:25:28.777195] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2974912 ] 00:24:35.799 EAL: No free 2048 kB hugepages reported on node 1 00:24:35.799 [2024-07-26 12:25:28.836651] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:35.799 [2024-07-26 12:25:28.943009] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:36.059 Running I/O for 15 seconds... 00:24:38.594 12:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 2974621 00:24:38.594 12:25:31 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:24:38.594 [2024-07-26 12:25:31.749065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:28712 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:28720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:28728 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:28736 len:8 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:28744 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:28752 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:28760 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:28768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:28776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 
12:25:31.749494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:28784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:28792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:28800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:28808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:28816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:28824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749672] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:28832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:28848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:28856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:28864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:28872 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:28880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:28888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:28896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.749972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.749989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:28904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.750004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.750021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:28912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.750036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.750053] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:28920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.750078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.750114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:28928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.750129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.750145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:28936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.750160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.750179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:28944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.750193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.750209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:28952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.750226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.750242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:28960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.750256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.594 [2024-07-26 12:25:31.750272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:28968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.594 [2024-07-26 12:25:31.750285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:28984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:28992 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:29000 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:29008 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:38.595 [2024-07-26 12:25:31.750460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:29016 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:29024 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:29032 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29040 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:29048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750641] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:29056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:29064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:29072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:29080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:29088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:29096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:29104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:29112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:29120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:29128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:29136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.750980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.750997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:29144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:38.595 [2024-07-26 12:25:31.751012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:29152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:29160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:29168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:29176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:29184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751217] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:29192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:29200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:29208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:29216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:29224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:29232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:29240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:29248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:29256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.595 [2024-07-26 12:25:31.751497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.595 [2024-07-26 12:25:31.751514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:29264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.751531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:29272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.751564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:29280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 
[2024-07-26 12:25:31.751596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.751629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:29296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.751661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:29672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.596 [2024-07-26 12:25:31.751693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:29680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.596 [2024-07-26 12:25:31.751725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.596 [2024-07-26 12:25:31.751757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751775] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:29696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.596 [2024-07-26 12:25:31.751790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:29704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.596 [2024-07-26 12:25:31.751824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:29712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.596 [2024-07-26 12:25:31.751857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:29720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.596 [2024-07-26 12:25:31.751890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:29728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.596 [2024-07-26 12:25:31.751929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:29304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.751962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.751979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.751995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:29320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:29328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:29336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:29344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:29352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752179] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:29360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:29368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:29376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:29384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:29392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 
cid:59 nsid:1 lba:29400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:29408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:29416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:29424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:29432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:38.596 [2024-07-26 12:25:31.752562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:29448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:29456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:29464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:29480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.596 [2024-07-26 12:25:31.752723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:29488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.596 [2024-07-26 12:25:31.752738] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.752755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:29496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.752774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.752792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:29504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.752808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.752825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:29512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.752840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.752857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:29520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.752872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.752889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:29528 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.752903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.752920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 
lba:29536 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.752935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.752952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:29544 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.752967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.752984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:29552 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.752999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:29560 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.753031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:29568 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.753070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:29576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.753118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 
[2024-07-26 12:25:31.753134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:29584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.753148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:29592 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.753176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:29600 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.753209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:29608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.753238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:29616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.753266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:29624 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.753295] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:29632 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.753324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:29640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.753353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29648 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.753400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:29656 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.597 [2024-07-26 12:25:31.753431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753447] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x211b830 is same with the state(5) to be set 00:24:38.597 [2024-07-26 12:25:31.753467] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:24:38.597 [2024-07-26 12:25:31.753480] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:24:38.597 [2024-07-26 12:25:31.753493] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:29664 len:8 PRP1 0x0 PRP2 0x0 00:24:38.597 [2024-07-26 12:25:31.753508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753584] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x211b830 was disconnected and freed. reset controller. 00:24:38.597 [2024-07-26 12:25:31.753663] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.597 [2024-07-26 12:25:31.753687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753703] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.597 [2024-07-26 12:25:31.753723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.597 [2024-07-26 12:25:31.753785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753812] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:38.597 [2024-07-26 12:25:31.753828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.597 [2024-07-26 12:25:31.753843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 
00:24:38.597 [2024-07-26 12:25:31.757704] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.597 [2024-07-26 12:25:31.757746] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.597 [2024-07-26 12:25:31.758501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.597 [2024-07-26 12:25:31.758531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.597 [2024-07-26 12:25:31.758548] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.597 [2024-07-26 12:25:31.758806] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.597 [2024-07-26 12:25:31.759052] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.597 [2024-07-26 12:25:31.759086] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.597 [2024-07-26 12:25:31.759126] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.597 [2024-07-26 12:25:31.762731] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.597 [2024-07-26 12:25:31.771851] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.597 [2024-07-26 12:25:31.772294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.597 [2024-07-26 12:25:31.772323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.597 [2024-07-26 12:25:31.772353] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.597 [2024-07-26 12:25:31.772588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.597 [2024-07-26 12:25:31.772845] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.597 [2024-07-26 12:25:31.772868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.597 [2024-07-26 12:25:31.772883] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.597 [2024-07-26 12:25:31.776486] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.597 [2024-07-26 12:25:31.785663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.598 [2024-07-26 12:25:31.786085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.598 [2024-07-26 12:25:31.786117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.598 [2024-07-26 12:25:31.786135] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.598 [2024-07-26 12:25:31.786374] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.598 [2024-07-26 12:25:31.786637] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.598 [2024-07-26 12:25:31.786661] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.598 [2024-07-26 12:25:31.786676] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.598 [2024-07-26 12:25:31.790283] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.598 [2024-07-26 12:25:31.799573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.598 [2024-07-26 12:25:31.800022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.598 [2024-07-26 12:25:31.800053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.598 [2024-07-26 12:25:31.800082] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.598 [2024-07-26 12:25:31.800322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.598 [2024-07-26 12:25:31.800565] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.598 [2024-07-26 12:25:31.800588] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.598 [2024-07-26 12:25:31.800603] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.598 [2024-07-26 12:25:31.804189] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.598 [2024-07-26 12:25:31.813479] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.598 [2024-07-26 12:25:31.813916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.598 [2024-07-26 12:25:31.813947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.598 [2024-07-26 12:25:31.813965] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.598 [2024-07-26 12:25:31.814214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.598 [2024-07-26 12:25:31.814458] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.598 [2024-07-26 12:25:31.814481] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.598 [2024-07-26 12:25:31.814496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.598 [2024-07-26 12:25:31.818079] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.598 [2024-07-26 12:25:31.827367] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.598 [2024-07-26 12:25:31.827805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.598 [2024-07-26 12:25:31.827835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.598 [2024-07-26 12:25:31.827854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.598 [2024-07-26 12:25:31.828103] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.598 [2024-07-26 12:25:31.828345] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.598 [2024-07-26 12:25:31.828368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.598 [2024-07-26 12:25:31.828383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.598 [2024-07-26 12:25:31.831965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.598 [2024-07-26 12:25:31.841261] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.598 [2024-07-26 12:25:31.841687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.598 [2024-07-26 12:25:31.841714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.598 [2024-07-26 12:25:31.841730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.598 [2024-07-26 12:25:31.841981] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.598 [2024-07-26 12:25:31.842237] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.598 [2024-07-26 12:25:31.842261] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.598 [2024-07-26 12:25:31.842276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.858 [2024-07-26 12:25:31.845857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.858 [2024-07-26 12:25:31.855202] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.858 [2024-07-26 12:25:31.855619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.858 [2024-07-26 12:25:31.855650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.858 [2024-07-26 12:25:31.855668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.858 [2024-07-26 12:25:31.855906] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.858 [2024-07-26 12:25:31.856159] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.858 [2024-07-26 12:25:31.856183] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.858 [2024-07-26 12:25:31.856198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.858 [2024-07-26 12:25:31.859772] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.858 [2024-07-26 12:25:31.869055] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.858 [2024-07-26 12:25:31.869499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.858 [2024-07-26 12:25:31.869530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.858 [2024-07-26 12:25:31.869547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.858 [2024-07-26 12:25:31.869786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.858 [2024-07-26 12:25:31.870027] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.858 [2024-07-26 12:25:31.870050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.858 [2024-07-26 12:25:31.870076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.858 [2024-07-26 12:25:31.873654] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.858 [2024-07-26 12:25:31.882971] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.858 [2024-07-26 12:25:31.883428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.858 [2024-07-26 12:25:31.883459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.858 [2024-07-26 12:25:31.883483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.858 [2024-07-26 12:25:31.883722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.858 [2024-07-26 12:25:31.883965] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.858 [2024-07-26 12:25:31.883987] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.858 [2024-07-26 12:25:31.884003] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.858 [2024-07-26 12:25:31.887607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.858 [2024-07-26 12:25:31.896898] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.858 [2024-07-26 12:25:31.897329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.858 [2024-07-26 12:25:31.897360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.858 [2024-07-26 12:25:31.897378] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.858 [2024-07-26 12:25:31.897616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.858 [2024-07-26 12:25:31.897858] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.858 [2024-07-26 12:25:31.897881] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.858 [2024-07-26 12:25:31.897896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.858 [2024-07-26 12:25:31.901481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.858 [2024-07-26 12:25:31.910774] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.858 [2024-07-26 12:25:31.911222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.858 [2024-07-26 12:25:31.911253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.858 [2024-07-26 12:25:31.911271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.858 [2024-07-26 12:25:31.911510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.859 [2024-07-26 12:25:31.911752] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.859 [2024-07-26 12:25:31.911775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.859 [2024-07-26 12:25:31.911790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.859 [2024-07-26 12:25:31.915376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.859 [2024-07-26 12:25:31.924660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.859 [2024-07-26 12:25:31.925080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.859 [2024-07-26 12:25:31.925113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.859 [2024-07-26 12:25:31.925131] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.859 [2024-07-26 12:25:31.925369] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.859 [2024-07-26 12:25:31.925611] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.859 [2024-07-26 12:25:31.925640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.859 [2024-07-26 12:25:31.925656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.859 [2024-07-26 12:25:31.929241] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.859 [2024-07-26 12:25:31.938529] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.859 [2024-07-26 12:25:31.938970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.859 [2024-07-26 12:25:31.939000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.859 [2024-07-26 12:25:31.939017] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.859 [2024-07-26 12:25:31.939268] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.859 [2024-07-26 12:25:31.939511] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.859 [2024-07-26 12:25:31.939534] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.859 [2024-07-26 12:25:31.939550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.859 [2024-07-26 12:25:31.943133] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.859 [2024-07-26 12:25:31.952426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.859 [2024-07-26 12:25:31.952868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.859 [2024-07-26 12:25:31.952898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.859 [2024-07-26 12:25:31.952916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.859 [2024-07-26 12:25:31.953165] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.859 [2024-07-26 12:25:31.953408] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.859 [2024-07-26 12:25:31.953431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.859 [2024-07-26 12:25:31.953446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.859 [2024-07-26 12:25:31.957023] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.859 [2024-07-26 12:25:31.966311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.859 [2024-07-26 12:25:31.966737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.859 [2024-07-26 12:25:31.966768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.859 [2024-07-26 12:25:31.966785] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.859 [2024-07-26 12:25:31.967024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.859 [2024-07-26 12:25:31.967276] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.859 [2024-07-26 12:25:31.967299] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.859 [2024-07-26 12:25:31.967314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.859 [2024-07-26 12:25:31.970888] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.859 [2024-07-26 12:25:31.980182] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.859 [2024-07-26 12:25:31.980591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.859 [2024-07-26 12:25:31.980622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.859 [2024-07-26 12:25:31.980640] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.859 [2024-07-26 12:25:31.980878] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.859 [2024-07-26 12:25:31.981134] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.859 [2024-07-26 12:25:31.981158] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.859 [2024-07-26 12:25:31.981174] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.859 [2024-07-26 12:25:31.984748] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.859 [2024-07-26 12:25:31.994044] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.859 [2024-07-26 12:25:31.994494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.859 [2024-07-26 12:25:31.994524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.859 [2024-07-26 12:25:31.994542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.859 [2024-07-26 12:25:31.994781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.859 [2024-07-26 12:25:31.995023] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.859 [2024-07-26 12:25:31.995046] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.859 [2024-07-26 12:25:31.995071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.859 [2024-07-26 12:25:31.998652] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.859 [2024-07-26 12:25:32.007935] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:38.859 [2024-07-26 12:25:32.008347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:38.859 [2024-07-26 12:25:32.008378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:38.859 [2024-07-26 12:25:32.008396] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:38.859 [2024-07-26 12:25:32.008634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:38.859 [2024-07-26 12:25:32.008906] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:38.859 [2024-07-26 12:25:32.008933] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:38.859 [2024-07-26 12:25:32.008949] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:38.859 [2024-07-26 12:25:32.013154] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:38.859 [2024-07-26 12:25:32.022685] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.859 [2024-07-26 12:25:32.023134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.859 [2024-07-26 12:25:32.023178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:38.859 [2024-07-26 12:25:32.023209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:38.859 [2024-07-26 12:25:32.023528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:38.859 [2024-07-26 12:25:32.023839] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.859 [2024-07-26 12:25:32.023874] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.859 [2024-07-26 12:25:32.023901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.859 [2024-07-26 12:25:32.028082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.859 [2024-07-26 12:25:32.036547] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.859 [2024-07-26 12:25:32.036982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.859 [2024-07-26 12:25:32.037014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:38.859 [2024-07-26 12:25:32.037032] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:38.860 [2024-07-26 12:25:32.037281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:38.860 [2024-07-26 12:25:32.037524] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.860 [2024-07-26 12:25:32.037547] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.860 [2024-07-26 12:25:32.037563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.860 [2024-07-26 12:25:32.041156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.860 [2024-07-26 12:25:32.050467] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.860 [2024-07-26 12:25:32.050922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.860 [2024-07-26 12:25:32.050955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:38.860 [2024-07-26 12:25:32.050974] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:38.860 [2024-07-26 12:25:32.051224] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:38.860 [2024-07-26 12:25:32.051468] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.860 [2024-07-26 12:25:32.051491] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.860 [2024-07-26 12:25:32.051507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.860 [2024-07-26 12:25:32.055092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.860 [2024-07-26 12:25:32.064389] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.860 [2024-07-26 12:25:32.064834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.860 [2024-07-26 12:25:32.064865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:38.860 [2024-07-26 12:25:32.064884] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:38.860 [2024-07-26 12:25:32.065133] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:38.860 [2024-07-26 12:25:32.065377] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.860 [2024-07-26 12:25:32.065400] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.860 [2024-07-26 12:25:32.065422] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.860 [2024-07-26 12:25:32.069004] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.860 [2024-07-26 12:25:32.078320] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.860 [2024-07-26 12:25:32.078764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.860 [2024-07-26 12:25:32.078794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:38.860 [2024-07-26 12:25:32.078813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:38.860 [2024-07-26 12:25:32.079051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:38.860 [2024-07-26 12:25:32.079304] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.860 [2024-07-26 12:25:32.079327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.860 [2024-07-26 12:25:32.079343] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.860 [2024-07-26 12:25:32.082921] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.860 [2024-07-26 12:25:32.092295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.860 [2024-07-26 12:25:32.092718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.860 [2024-07-26 12:25:32.092750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:38.860 [2024-07-26 12:25:32.092769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:38.860 [2024-07-26 12:25:32.093008] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:38.860 [2024-07-26 12:25:32.093262] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.860 [2024-07-26 12:25:32.093287] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.860 [2024-07-26 12:25:32.093302] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:38.860 [2024-07-26 12:25:32.096881] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:38.860 [2024-07-26 12:25:32.106199] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:38.860 [2024-07-26 12:25:32.106615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:38.860 [2024-07-26 12:25:32.106647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:38.860 [2024-07-26 12:25:32.106665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:38.860 [2024-07-26 12:25:32.106909] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:38.860 [2024-07-26 12:25:32.107166] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:38.860 [2024-07-26 12:25:32.107189] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:38.860 [2024-07-26 12:25:32.107204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.119 [2024-07-26 12:25:32.110784] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.119 [2024-07-26 12:25:32.120088] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.119 [2024-07-26 12:25:32.120513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.119 [2024-07-26 12:25:32.120544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.119 [2024-07-26 12:25:32.120562] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.119 [2024-07-26 12:25:32.120800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.119 [2024-07-26 12:25:32.121043] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.119 [2024-07-26 12:25:32.121077] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.119 [2024-07-26 12:25:32.121094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.119 [2024-07-26 12:25:32.124672] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.119 [2024-07-26 12:25:32.133965] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.119 [2024-07-26 12:25:32.134380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.119 [2024-07-26 12:25:32.134410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.119 [2024-07-26 12:25:32.134428] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.119 [2024-07-26 12:25:32.134666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.119 [2024-07-26 12:25:32.134908] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.119 [2024-07-26 12:25:32.134931] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.119 [2024-07-26 12:25:32.134946] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.119 [2024-07-26 12:25:32.138537] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.119 [2024-07-26 12:25:32.147837] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.119 [2024-07-26 12:25:32.148270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.119 [2024-07-26 12:25:32.148301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.119 [2024-07-26 12:25:32.148319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.119 [2024-07-26 12:25:32.148558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.119 [2024-07-26 12:25:32.148800] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.119 [2024-07-26 12:25:32.148823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.119 [2024-07-26 12:25:32.148838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.119 [2024-07-26 12:25:32.152427] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.119 [2024-07-26 12:25:32.161723] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.119 [2024-07-26 12:25:32.162156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.119 [2024-07-26 12:25:32.162188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.119 [2024-07-26 12:25:32.162206] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.119 [2024-07-26 12:25:32.162450] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.119 [2024-07-26 12:25:32.162693] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.119 [2024-07-26 12:25:32.162716] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.119 [2024-07-26 12:25:32.162731] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.119 [2024-07-26 12:25:32.166320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.119 [2024-07-26 12:25:32.175650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.119 [2024-07-26 12:25:32.176068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.119 [2024-07-26 12:25:32.176100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.119 [2024-07-26 12:25:32.176118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.119 [2024-07-26 12:25:32.176357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.119 [2024-07-26 12:25:32.176599] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.119 [2024-07-26 12:25:32.176622] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.119 [2024-07-26 12:25:32.176637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.119 [2024-07-26 12:25:32.180224] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.119 [2024-07-26 12:25:32.189535] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.119 [2024-07-26 12:25:32.189973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.119 [2024-07-26 12:25:32.190004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.119 [2024-07-26 12:25:32.190022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.119 [2024-07-26 12:25:32.190271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.119 [2024-07-26 12:25:32.190515] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.119 [2024-07-26 12:25:32.190538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.119 [2024-07-26 12:25:32.190553] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.119 [2024-07-26 12:25:32.194135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.119 [2024-07-26 12:25:32.203418] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.119 [2024-07-26 12:25:32.203848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.119 [2024-07-26 12:25:32.203879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.119 [2024-07-26 12:25:32.203897] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.119 [2024-07-26 12:25:32.204146] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.119 [2024-07-26 12:25:32.204390] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.119 [2024-07-26 12:25:32.204413] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.119 [2024-07-26 12:25:32.204434] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.119 [2024-07-26 12:25:32.208010] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.119 [2024-07-26 12:25:32.217298] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.119 [2024-07-26 12:25:32.217715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.119 [2024-07-26 12:25:32.217746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.119 [2024-07-26 12:25:32.217764] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.119 [2024-07-26 12:25:32.218002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.119 [2024-07-26 12:25:32.218255] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.119 [2024-07-26 12:25:32.218278] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.119 [2024-07-26 12:25:32.218294] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.119 [2024-07-26 12:25:32.221868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.120 [2024-07-26 12:25:32.231154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.120 [2024-07-26 12:25:32.231570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.120 [2024-07-26 12:25:32.231600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.120 [2024-07-26 12:25:32.231618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.120 [2024-07-26 12:25:32.231857] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.120 [2024-07-26 12:25:32.232109] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.120 [2024-07-26 12:25:32.232133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.120 [2024-07-26 12:25:32.232148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.120 [2024-07-26 12:25:32.235747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.120 [2024-07-26 12:25:32.245029] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.120 [2024-07-26 12:25:32.245445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.120 [2024-07-26 12:25:32.245476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.120 [2024-07-26 12:25:32.245494] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.120 [2024-07-26 12:25:32.245731] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.120 [2024-07-26 12:25:32.245973] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.120 [2024-07-26 12:25:32.245996] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.120 [2024-07-26 12:25:32.246012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.120 [2024-07-26 12:25:32.249595] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.120 [2024-07-26 12:25:32.258873] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.120 [2024-07-26 12:25:32.259335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.120 [2024-07-26 12:25:32.259385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.120 [2024-07-26 12:25:32.259415] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.120 [2024-07-26 12:25:32.259724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.120 [2024-07-26 12:25:32.260039] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.120 [2024-07-26 12:25:32.260083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.120 [2024-07-26 12:25:32.260110] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.120 [2024-07-26 12:25:32.264265] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.120 [2024-07-26 12:25:32.272925] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.120 [2024-07-26 12:25:32.273351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.120 [2024-07-26 12:25:32.273385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.120 [2024-07-26 12:25:32.273404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.120 [2024-07-26 12:25:32.273643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.120 [2024-07-26 12:25:32.273886] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.120 [2024-07-26 12:25:32.273909] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.120 [2024-07-26 12:25:32.273925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.120 [2024-07-26 12:25:32.277505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.120 [2024-07-26 12:25:32.286780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.120 [2024-07-26 12:25:32.287201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.120 [2024-07-26 12:25:32.287232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.120 [2024-07-26 12:25:32.287250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.120 [2024-07-26 12:25:32.287490] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.120 [2024-07-26 12:25:32.287732] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.120 [2024-07-26 12:25:32.287755] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.120 [2024-07-26 12:25:32.287770] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.120 [2024-07-26 12:25:32.291390] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.120 [2024-07-26 12:25:32.300681] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.120 [2024-07-26 12:25:32.301094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.120 [2024-07-26 12:25:32.301126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.120 [2024-07-26 12:25:32.301145] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.120 [2024-07-26 12:25:32.301384] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.120 [2024-07-26 12:25:32.301634] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.120 [2024-07-26 12:25:32.301658] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.120 [2024-07-26 12:25:32.301673] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.120 [2024-07-26 12:25:32.305258] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.120 [2024-07-26 12:25:32.314537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.120 [2024-07-26 12:25:32.314973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.120 [2024-07-26 12:25:32.315003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.120 [2024-07-26 12:25:32.315021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.120 [2024-07-26 12:25:32.315271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.120 [2024-07-26 12:25:32.315515] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.120 [2024-07-26 12:25:32.315538] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.120 [2024-07-26 12:25:32.315554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.120 [2024-07-26 12:25:32.319139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.120 [2024-07-26 12:25:32.328436] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.120 [2024-07-26 12:25:32.328845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.120 [2024-07-26 12:25:32.328876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.120 [2024-07-26 12:25:32.328895] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.120 [2024-07-26 12:25:32.329143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.120 [2024-07-26 12:25:32.329385] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.120 [2024-07-26 12:25:32.329408] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.120 [2024-07-26 12:25:32.329424] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.120 [2024-07-26 12:25:32.332995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.120 [2024-07-26 12:25:32.342297] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.120 [2024-07-26 12:25:32.342738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.120 [2024-07-26 12:25:32.342769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.120 [2024-07-26 12:25:32.342786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.120 [2024-07-26 12:25:32.343025] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.120 [2024-07-26 12:25:32.343276] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.120 [2024-07-26 12:25:32.343300] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.120 [2024-07-26 12:25:32.343315] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.120 [2024-07-26 12:25:32.346896] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.120 [2024-07-26 12:25:32.356220] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.120 [2024-07-26 12:25:32.356653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.120 [2024-07-26 12:25:32.356684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.120 [2024-07-26 12:25:32.356701] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.120 [2024-07-26 12:25:32.356940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.121 [2024-07-26 12:25:32.357192] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.121 [2024-07-26 12:25:32.357215] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.121 [2024-07-26 12:25:32.357231] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.121 [2024-07-26 12:25:32.360803] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.121 [2024-07-26 12:25:32.370102] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.121 [2024-07-26 12:25:32.370512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.121 [2024-07-26 12:25:32.370543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.121 [2024-07-26 12:25:32.370561] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.121 [2024-07-26 12:25:32.370799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.121 [2024-07-26 12:25:32.371049] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.121 [2024-07-26 12:25:32.371083] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.121 [2024-07-26 12:25:32.371100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.380 [2024-07-26 12:25:32.374676] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.380 [2024-07-26 12:25:32.383967] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.380 [2024-07-26 12:25:32.384417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.380 [2024-07-26 12:25:32.384448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.380 [2024-07-26 12:25:32.384466] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.380 [2024-07-26 12:25:32.384704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.380 [2024-07-26 12:25:32.384946] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.380 [2024-07-26 12:25:32.384968] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.380 [2024-07-26 12:25:32.384984] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.380 [2024-07-26 12:25:32.388576] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.380 [2024-07-26 12:25:32.397890] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.380 [2024-07-26 12:25:32.398316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.380 [2024-07-26 12:25:32.398347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.380 [2024-07-26 12:25:32.398375] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.380 [2024-07-26 12:25:32.398615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.380 [2024-07-26 12:25:32.398858] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.380 [2024-07-26 12:25:32.398880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.380 [2024-07-26 12:25:32.398895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.380 [2024-07-26 12:25:32.402487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.380 [2024-07-26 12:25:32.411787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.380 [2024-07-26 12:25:32.412214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.380 [2024-07-26 12:25:32.412245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.380 [2024-07-26 12:25:32.412262] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.380 [2024-07-26 12:25:32.412501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.380 [2024-07-26 12:25:32.412743] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.381 [2024-07-26 12:25:32.412766] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.381 [2024-07-26 12:25:32.412781] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.381 [2024-07-26 12:25:32.416368] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.381 [2024-07-26 12:25:32.425658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.381 [2024-07-26 12:25:32.426070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.381 [2024-07-26 12:25:32.426101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.381 [2024-07-26 12:25:32.426119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.381 [2024-07-26 12:25:32.426357] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.381 [2024-07-26 12:25:32.426609] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.381 [2024-07-26 12:25:32.426631] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.381 [2024-07-26 12:25:32.426646] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.381 [2024-07-26 12:25:32.430234] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.381 [2024-07-26 12:25:32.439536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.381 [2024-07-26 12:25:32.439969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.381 [2024-07-26 12:25:32.439999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.381 [2024-07-26 12:25:32.440016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.381 [2024-07-26 12:25:32.440264] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.381 [2024-07-26 12:25:32.440508] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.381 [2024-07-26 12:25:32.440536] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.381 [2024-07-26 12:25:32.440552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.381 [2024-07-26 12:25:32.444143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.381 [2024-07-26 12:25:32.453432] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.381 [2024-07-26 12:25:32.453844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.381 [2024-07-26 12:25:32.453873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.381 [2024-07-26 12:25:32.453891] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.381 [2024-07-26 12:25:32.454138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.381 [2024-07-26 12:25:32.454381] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.381 [2024-07-26 12:25:32.454404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.381 [2024-07-26 12:25:32.454419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.381 [2024-07-26 12:25:32.457996] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.381 [2024-07-26 12:25:32.467295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.381 [2024-07-26 12:25:32.467735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.381 [2024-07-26 12:25:32.467766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.381 [2024-07-26 12:25:32.467784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.381 [2024-07-26 12:25:32.468023] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.381 [2024-07-26 12:25:32.468274] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.381 [2024-07-26 12:25:32.468298] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.381 [2024-07-26 12:25:32.468313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.381 [2024-07-26 12:25:32.471892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.381 [2024-07-26 12:25:32.481194] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.381 [2024-07-26 12:25:32.481629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.381 [2024-07-26 12:25:32.481660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.381 [2024-07-26 12:25:32.481678] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.381 [2024-07-26 12:25:32.481916] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.381 [2024-07-26 12:25:32.482168] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.381 [2024-07-26 12:25:32.482192] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.381 [2024-07-26 12:25:32.482208] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.381 [2024-07-26 12:25:32.485785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.381 [2024-07-26 12:25:32.495111] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.381 [2024-07-26 12:25:32.495556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.381 [2024-07-26 12:25:32.495586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.381 [2024-07-26 12:25:32.495604] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.381 [2024-07-26 12:25:32.495843] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.381 [2024-07-26 12:25:32.496094] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.381 [2024-07-26 12:25:32.496117] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.381 [2024-07-26 12:25:32.496133] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.381 [2024-07-26 12:25:32.499730] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.381 [2024-07-26 12:25:32.509008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.381 [2024-07-26 12:25:32.509467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.381 [2024-07-26 12:25:32.509509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.381 [2024-07-26 12:25:32.509538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.381 [2024-07-26 12:25:32.509844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.381 [2024-07-26 12:25:32.510178] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.381 [2024-07-26 12:25:32.510213] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.381 [2024-07-26 12:25:32.510240] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.381 [2024-07-26 12:25:32.514420] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.381 [2024-07-26 12:25:32.522981] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.381 [2024-07-26 12:25:32.523408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.381 [2024-07-26 12:25:32.523442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.381 [2024-07-26 12:25:32.523460] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.381 [2024-07-26 12:25:32.523700] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.381 [2024-07-26 12:25:32.523942] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.381 [2024-07-26 12:25:32.523965] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.381 [2024-07-26 12:25:32.523981] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.381 [2024-07-26 12:25:32.527586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.381 [2024-07-26 12:25:32.536874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.381 [2024-07-26 12:25:32.537320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.381 [2024-07-26 12:25:32.537351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.381 [2024-07-26 12:25:32.537369] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.381 [2024-07-26 12:25:32.537614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.381 [2024-07-26 12:25:32.537857] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.381 [2024-07-26 12:25:32.537879] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.381 [2024-07-26 12:25:32.537895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.381 [2024-07-26 12:25:32.541481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.382 [2024-07-26 12:25:32.550770] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.382 [2024-07-26 12:25:32.551197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.382 [2024-07-26 12:25:32.551228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.382 [2024-07-26 12:25:32.551246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.382 [2024-07-26 12:25:32.551485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.382 [2024-07-26 12:25:32.551728] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.382 [2024-07-26 12:25:32.551751] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.382 [2024-07-26 12:25:32.551767] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.382 [2024-07-26 12:25:32.555351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.382 [2024-07-26 12:25:32.564651] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.382 [2024-07-26 12:25:32.565093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.382 [2024-07-26 12:25:32.565124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.382 [2024-07-26 12:25:32.565142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.382 [2024-07-26 12:25:32.565381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.382 [2024-07-26 12:25:32.565623] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.382 [2024-07-26 12:25:32.565646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.382 [2024-07-26 12:25:32.565662] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.382 [2024-07-26 12:25:32.569245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.382 [2024-07-26 12:25:32.578536] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.382 [2024-07-26 12:25:32.578978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.382 [2024-07-26 12:25:32.579009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.382 [2024-07-26 12:25:32.579027] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.382 [2024-07-26 12:25:32.579274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.382 [2024-07-26 12:25:32.579517] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.382 [2024-07-26 12:25:32.579540] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.382 [2024-07-26 12:25:32.579561] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.382 [2024-07-26 12:25:32.583145] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.382 [2024-07-26 12:25:32.592460] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.382 [2024-07-26 12:25:32.592857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.382 [2024-07-26 12:25:32.592889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.382 [2024-07-26 12:25:32.592907] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.382 [2024-07-26 12:25:32.593163] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.382 [2024-07-26 12:25:32.593408] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.382 [2024-07-26 12:25:32.593431] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.382 [2024-07-26 12:25:32.593446] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.382 [2024-07-26 12:25:32.597020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.382 [2024-07-26 12:25:32.606347] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.382 [2024-07-26 12:25:32.606731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.382 [2024-07-26 12:25:32.606761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.382 [2024-07-26 12:25:32.606779] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.382 [2024-07-26 12:25:32.607017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.382 [2024-07-26 12:25:32.607271] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.382 [2024-07-26 12:25:32.607295] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.382 [2024-07-26 12:25:32.607311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.382 [2024-07-26 12:25:32.610886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.382 [2024-07-26 12:25:32.620398] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.382 [2024-07-26 12:25:32.620812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.382 [2024-07-26 12:25:32.620843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.382 [2024-07-26 12:25:32.620861] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.382 [2024-07-26 12:25:32.621109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.382 [2024-07-26 12:25:32.621352] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.382 [2024-07-26 12:25:32.621375] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.382 [2024-07-26 12:25:32.621391] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.382 [2024-07-26 12:25:32.624965] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.643 [2024-07-26 12:25:32.634270] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.643 [2024-07-26 12:25:32.634700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.643 [2024-07-26 12:25:32.634731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.643 [2024-07-26 12:25:32.634749] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.643 [2024-07-26 12:25:32.634988] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.643 [2024-07-26 12:25:32.635241] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.643 [2024-07-26 12:25:32.635264] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.643 [2024-07-26 12:25:32.635280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.643 [2024-07-26 12:25:32.638860] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.643 [2024-07-26 12:25:32.648154] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.643 [2024-07-26 12:25:32.648570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.643 [2024-07-26 12:25:32.648600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.643 [2024-07-26 12:25:32.648618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.643 [2024-07-26 12:25:32.648856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.643 [2024-07-26 12:25:32.649109] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.643 [2024-07-26 12:25:32.649132] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.643 [2024-07-26 12:25:32.649148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.643 [2024-07-26 12:25:32.652724] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.643 [2024-07-26 12:25:32.662008] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.643 [2024-07-26 12:25:32.662419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.643 [2024-07-26 12:25:32.662450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.643 [2024-07-26 12:25:32.662468] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.643 [2024-07-26 12:25:32.662707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.643 [2024-07-26 12:25:32.662949] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.643 [2024-07-26 12:25:32.662972] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.643 [2024-07-26 12:25:32.662987] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.643 [2024-07-26 12:25:32.666570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.643 [2024-07-26 12:25:32.675854] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.643 [2024-07-26 12:25:32.676268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.643 [2024-07-26 12:25:32.676299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.643 [2024-07-26 12:25:32.676317] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.643 [2024-07-26 12:25:32.676561] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.643 [2024-07-26 12:25:32.676804] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.643 [2024-07-26 12:25:32.676826] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.643 [2024-07-26 12:25:32.676841] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.643 [2024-07-26 12:25:32.680426] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.643 [2024-07-26 12:25:32.689931] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.643 [2024-07-26 12:25:32.690356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.643 [2024-07-26 12:25:32.690387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.643 [2024-07-26 12:25:32.690405] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.643 [2024-07-26 12:25:32.690644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.643 [2024-07-26 12:25:32.690885] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.643 [2024-07-26 12:25:32.690908] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.643 [2024-07-26 12:25:32.690923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.643 [2024-07-26 12:25:32.694509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.643 [2024-07-26 12:25:32.703787] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.643 [2024-07-26 12:25:32.704199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.643 [2024-07-26 12:25:32.704229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.643 [2024-07-26 12:25:32.704247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.643 [2024-07-26 12:25:32.704486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.643 [2024-07-26 12:25:32.704728] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.643 [2024-07-26 12:25:32.704750] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.643 [2024-07-26 12:25:32.704766] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.643 [2024-07-26 12:25:32.708371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.643 [2024-07-26 12:25:32.717653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.643 [2024-07-26 12:25:32.718070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.643 [2024-07-26 12:25:32.718103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.643 [2024-07-26 12:25:32.718122] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.643 [2024-07-26 12:25:32.718361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.643 [2024-07-26 12:25:32.718604] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.643 [2024-07-26 12:25:32.718627] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.643 [2024-07-26 12:25:32.718648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.643 [2024-07-26 12:25:32.722233] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.643 [2024-07-26 12:25:32.731517] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.643 [2024-07-26 12:25:32.731930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.643 [2024-07-26 12:25:32.731961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.643 [2024-07-26 12:25:32.731978] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.643 [2024-07-26 12:25:32.732227] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.643 [2024-07-26 12:25:32.732470] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.643 [2024-07-26 12:25:32.732493] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.643 [2024-07-26 12:25:32.732509] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.643 [2024-07-26 12:25:32.736088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.643 [2024-07-26 12:25:32.745362] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.643 [2024-07-26 12:25:32.745768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.643 [2024-07-26 12:25:32.745798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.643 [2024-07-26 12:25:32.745816] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.643 [2024-07-26 12:25:32.746054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.643 [2024-07-26 12:25:32.746307] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.643 [2024-07-26 12:25:32.746330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.643 [2024-07-26 12:25:32.746345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.643 [2024-07-26 12:25:32.749915] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.643 [2024-07-26 12:25:32.759200] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.643 [2024-07-26 12:25:32.759651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.644 [2024-07-26 12:25:32.759693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.644 [2024-07-26 12:25:32.759723] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.644 [2024-07-26 12:25:32.760031] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.644 [2024-07-26 12:25:32.760354] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.644 [2024-07-26 12:25:32.760389] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.644 [2024-07-26 12:25:32.760415] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.644 [2024-07-26 12:25:32.764562] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.644 [2024-07-26 12:25:32.773115] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.644 [2024-07-26 12:25:32.773555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.644 [2024-07-26 12:25:32.773593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.644 [2024-07-26 12:25:32.773613] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.644 [2024-07-26 12:25:32.773852] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.644 [2024-07-26 12:25:32.774352] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.644 [2024-07-26 12:25:32.774377] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.644 [2024-07-26 12:25:32.774392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.644 [2024-07-26 12:25:32.777971] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.644 [2024-07-26 12:25:32.787045] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.644 [2024-07-26 12:25:32.787466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.644 [2024-07-26 12:25:32.787498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.644 [2024-07-26 12:25:32.787516] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.644 [2024-07-26 12:25:32.787755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.644 [2024-07-26 12:25:32.787997] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.644 [2024-07-26 12:25:32.788020] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.644 [2024-07-26 12:25:32.788035] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.644 [2024-07-26 12:25:32.791635] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.644 [2024-07-26 12:25:32.800913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.644 [2024-07-26 12:25:32.801335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.644 [2024-07-26 12:25:32.801367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.644 [2024-07-26 12:25:32.801385] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.644 [2024-07-26 12:25:32.801623] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.644 [2024-07-26 12:25:32.801865] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.644 [2024-07-26 12:25:32.801887] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.644 [2024-07-26 12:25:32.801903] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.644 [2024-07-26 12:25:32.805489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.644 [2024-07-26 12:25:32.814761] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.644 [2024-07-26 12:25:32.815197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.644 [2024-07-26 12:25:32.815228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.644 [2024-07-26 12:25:32.815246] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.644 [2024-07-26 12:25:32.815485] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.644 [2024-07-26 12:25:32.815733] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.644 [2024-07-26 12:25:32.815757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.644 [2024-07-26 12:25:32.815772] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.644 [2024-07-26 12:25:32.818812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.644 [2024-07-26 12:25:32.827966] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.644 [2024-07-26 12:25:32.828345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.644 [2024-07-26 12:25:32.828372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.644 [2024-07-26 12:25:32.828388] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.644 [2024-07-26 12:25:32.828596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.644 [2024-07-26 12:25:32.828817] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.644 [2024-07-26 12:25:32.828836] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.644 [2024-07-26 12:25:32.828849] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.644 [2024-07-26 12:25:32.831897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.644 [2024-07-26 12:25:32.841191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.644 [2024-07-26 12:25:32.841667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.644 [2024-07-26 12:25:32.841694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.644 [2024-07-26 12:25:32.841710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.644 [2024-07-26 12:25:32.841951] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.644 [2024-07-26 12:25:32.842216] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.644 [2024-07-26 12:25:32.842238] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.644 [2024-07-26 12:25:32.842252] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.644 [2024-07-26 12:25:32.845252] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.644 [2024-07-26 12:25:32.854356] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.644 [2024-07-26 12:25:32.854796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.644 [2024-07-26 12:25:32.854822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.644 [2024-07-26 12:25:32.854838] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.644 [2024-07-26 12:25:32.855082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.644 [2024-07-26 12:25:32.855306] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.644 [2024-07-26 12:25:32.855327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.644 [2024-07-26 12:25:32.855341] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.644 [2024-07-26 12:25:32.858344] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.644 [2024-07-26 12:25:32.867618] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.644 [2024-07-26 12:25:32.868037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.644 [2024-07-26 12:25:32.868072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.644 [2024-07-26 12:25:32.868089] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.644 [2024-07-26 12:25:32.868329] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.644 [2024-07-26 12:25:32.868544] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.644 [2024-07-26 12:25:32.868563] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.644 [2024-07-26 12:25:32.868575] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.644 [2024-07-26 12:25:32.871568] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.644 [2024-07-26 12:25:32.880955] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.644 [2024-07-26 12:25:32.881406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.644 [2024-07-26 12:25:32.881434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.644 [2024-07-26 12:25:32.881451] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.645 [2024-07-26 12:25:32.881704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.645 [2024-07-26 12:25:32.881903] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.645 [2024-07-26 12:25:32.881922] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.645 [2024-07-26 12:25:32.881934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.645 [2024-07-26 12:25:32.884962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.645 [2024-07-26 12:25:32.894486] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.645 [2024-07-26 12:25:32.894894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.645 [2024-07-26 12:25:32.894937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.645 [2024-07-26 12:25:32.894953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.645 [2024-07-26 12:25:32.895191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.905 [2024-07-26 12:25:32.895437] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.905 [2024-07-26 12:25:32.895457] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.905 [2024-07-26 12:25:32.895470] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.905 [2024-07-26 12:25:32.898542] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.905 [2024-07-26 12:25:32.907724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.905 [2024-07-26 12:25:32.908203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.905 [2024-07-26 12:25:32.908231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.905 [2024-07-26 12:25:32.908252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.905 [2024-07-26 12:25:32.908508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.905 [2024-07-26 12:25:32.908707] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.905 [2024-07-26 12:25:32.908726] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.905 [2024-07-26 12:25:32.908738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.905 [2024-07-26 12:25:32.911763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.905 [2024-07-26 12:25:32.920909] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.905 [2024-07-26 12:25:32.921367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.905 [2024-07-26 12:25:32.921408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.905 [2024-07-26 12:25:32.921424] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.905 [2024-07-26 12:25:32.921671] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.905 [2024-07-26 12:25:32.921870] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.905 [2024-07-26 12:25:32.921888] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.905 [2024-07-26 12:25:32.921901] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.905 [2024-07-26 12:25:32.924923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.905 [2024-07-26 12:25:32.934289] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.905 [2024-07-26 12:25:32.934702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.905 [2024-07-26 12:25:32.934730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.905 [2024-07-26 12:25:32.934746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.905 [2024-07-26 12:25:32.934989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.905 [2024-07-26 12:25:32.935242] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.905 [2024-07-26 12:25:32.935265] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.905 [2024-07-26 12:25:32.935278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.905 [2024-07-26 12:25:32.938281] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.905 [2024-07-26 12:25:32.947600] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.905 [2024-07-26 12:25:32.947980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.905 [2024-07-26 12:25:32.948021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.905 [2024-07-26 12:25:32.948036] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.905 [2024-07-26 12:25:32.948286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.906 [2024-07-26 12:25:32.948540] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.906 [2024-07-26 12:25:32.948564] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.906 [2024-07-26 12:25:32.948577] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.906 [2024-07-26 12:25:32.951560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.906 [2024-07-26 12:25:32.960842] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.906 [2024-07-26 12:25:32.961318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.906 [2024-07-26 12:25:32.961360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.906 [2024-07-26 12:25:32.961376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.906 [2024-07-26 12:25:32.961614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.906 [2024-07-26 12:25:32.961812] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.906 [2024-07-26 12:25:32.961831] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.906 [2024-07-26 12:25:32.961844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.906 [2024-07-26 12:25:32.964833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.906 [2024-07-26 12:25:32.974108] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.906 [2024-07-26 12:25:32.974534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.906 [2024-07-26 12:25:32.974560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.906 [2024-07-26 12:25:32.974590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.906 [2024-07-26 12:25:32.974831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.906 [2024-07-26 12:25:32.975030] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.906 [2024-07-26 12:25:32.975048] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.906 [2024-07-26 12:25:32.975068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.906 [2024-07-26 12:25:32.978055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.906 [2024-07-26 12:25:32.987394] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.906 [2024-07-26 12:25:32.987765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.906 [2024-07-26 12:25:32.987807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.906 [2024-07-26 12:25:32.987822] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.906 [2024-07-26 12:25:32.988079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.906 [2024-07-26 12:25:32.988307] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.906 [2024-07-26 12:25:32.988327] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.906 [2024-07-26 12:25:32.988340] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.906 [2024-07-26 12:25:32.991332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.906 [2024-07-26 12:25:33.000660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.906 [2024-07-26 12:25:33.001076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.906 [2024-07-26 12:25:33.001105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.906 [2024-07-26 12:25:33.001121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.906 [2024-07-26 12:25:33.001361] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.906 [2024-07-26 12:25:33.001575] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.906 [2024-07-26 12:25:33.001594] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.906 [2024-07-26 12:25:33.001606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.906 [2024-07-26 12:25:33.004689] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.906 [2024-07-26 12:25:33.013978] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.906 [2024-07-26 12:25:33.014434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.906 [2024-07-26 12:25:33.014473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.906 [2024-07-26 12:25:33.014500] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.906 [2024-07-26 12:25:33.014800] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.906 [2024-07-26 12:25:33.015092] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.906 [2024-07-26 12:25:33.015122] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.906 [2024-07-26 12:25:33.015145] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.906 [2024-07-26 12:25:33.019321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.906 [2024-07-26 12:25:33.027321] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.906 [2024-07-26 12:25:33.027759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.906 [2024-07-26 12:25:33.027804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.906 [2024-07-26 12:25:33.027820] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.906 [2024-07-26 12:25:33.028066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.906 [2024-07-26 12:25:33.028293] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.906 [2024-07-26 12:25:33.028314] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.906 [2024-07-26 12:25:33.028327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.906 [2024-07-26 12:25:33.031353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.906 [2024-07-26 12:25:33.040675] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.906 [2024-07-26 12:25:33.041122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.906 [2024-07-26 12:25:33.041151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.906 [2024-07-26 12:25:33.041167] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.906 [2024-07-26 12:25:33.041401] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.906 [2024-07-26 12:25:33.041616] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.906 [2024-07-26 12:25:33.041635] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.906 [2024-07-26 12:25:33.041648] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.906 [2024-07-26 12:25:33.044641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.906 [2024-07-26 12:25:33.053900] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:39.906 [2024-07-26 12:25:33.054391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:39.906 [2024-07-26 12:25:33.054419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:39.906 [2024-07-26 12:25:33.054450] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:39.906 [2024-07-26 12:25:33.054704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:39.906 [2024-07-26 12:25:33.054903] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:39.906 [2024-07-26 12:25:33.054921] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:39.906 [2024-07-26 12:25:33.054933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:39.906 [2024-07-26 12:25:33.057954] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:39.906 [2024-07-26 12:25:33.067790] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.906 [2024-07-26 12:25:33.068222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.906 [2024-07-26 12:25:33.068253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.906 [2024-07-26 12:25:33.068271] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.906 [2024-07-26 12:25:33.068510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.906 [2024-07-26 12:25:33.068753] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.906 [2024-07-26 12:25:33.068775] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.906 [2024-07-26 12:25:33.068790] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.907 [2024-07-26 12:25:33.072375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.907 [2024-07-26 12:25:33.081660] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.907 [2024-07-26 12:25:33.082172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.907 [2024-07-26 12:25:33.082203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.907 [2024-07-26 12:25:33.082221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.907 [2024-07-26 12:25:33.082460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.907 [2024-07-26 12:25:33.082701] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.907 [2024-07-26 12:25:33.082724] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.907 [2024-07-26 12:25:33.082749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.907 [2024-07-26 12:25:33.086334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.907 [2024-07-26 12:25:33.095658] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.907 [2024-07-26 12:25:33.096070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.907 [2024-07-26 12:25:33.096101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.907 [2024-07-26 12:25:33.096119] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.907 [2024-07-26 12:25:33.096358] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.907 [2024-07-26 12:25:33.096601] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.907 [2024-07-26 12:25:33.096623] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.907 [2024-07-26 12:25:33.096638] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.907 [2024-07-26 12:25:33.100236] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.907 [2024-07-26 12:25:33.109542] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.907 [2024-07-26 12:25:33.109958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.907 [2024-07-26 12:25:33.109989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.907 [2024-07-26 12:25:33.110007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.907 [2024-07-26 12:25:33.110254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.907 [2024-07-26 12:25:33.110497] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.907 [2024-07-26 12:25:33.110520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.907 [2024-07-26 12:25:33.110535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.907 [2024-07-26 12:25:33.114124] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.907 [2024-07-26 12:25:33.123453] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.907 [2024-07-26 12:25:33.123918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.907 [2024-07-26 12:25:33.123950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.907 [2024-07-26 12:25:33.123968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.907 [2024-07-26 12:25:33.124217] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.907 [2024-07-26 12:25:33.124460] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.907 [2024-07-26 12:25:33.124482] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.907 [2024-07-26 12:25:33.124498] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.907 [2024-07-26 12:25:33.128090] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.907 [2024-07-26 12:25:33.137384] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.907 [2024-07-26 12:25:33.137895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.907 [2024-07-26 12:25:33.137925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.907 [2024-07-26 12:25:33.137943] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.907 [2024-07-26 12:25:33.138198] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.907 [2024-07-26 12:25:33.138441] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.907 [2024-07-26 12:25:33.138464] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.907 [2024-07-26 12:25:33.138480] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.907 [2024-07-26 12:25:33.142057] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:39.907 [2024-07-26 12:25:33.151371] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:39.907 [2024-07-26 12:25:33.151813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:39.907 [2024-07-26 12:25:33.151844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:39.907 [2024-07-26 12:25:33.151862] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:39.907 [2024-07-26 12:25:33.152113] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:39.907 [2024-07-26 12:25:33.152357] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:39.907 [2024-07-26 12:25:33.152380] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:39.907 [2024-07-26 12:25:33.152395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:39.907 [2024-07-26 12:25:33.155978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.176 [2024-07-26 12:25:33.165296] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.176 [2024-07-26 12:25:33.165706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.176 [2024-07-26 12:25:33.165736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.176 [2024-07-26 12:25:33.165754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.176 [2024-07-26 12:25:33.165993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.176 [2024-07-26 12:25:33.166246] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.176 [2024-07-26 12:25:33.166270] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.176 [2024-07-26 12:25:33.166285] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.176 [2024-07-26 12:25:33.169867] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.176 [2024-07-26 12:25:33.179161] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.176 [2024-07-26 12:25:33.179606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.176 [2024-07-26 12:25:33.179636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.176 [2024-07-26 12:25:33.179654] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.176 [2024-07-26 12:25:33.179893] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.176 [2024-07-26 12:25:33.180153] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.176 [2024-07-26 12:25:33.180178] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.176 [2024-07-26 12:25:33.180193] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.176 [2024-07-26 12:25:33.183769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.176 [2024-07-26 12:25:33.193079] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.176 [2024-07-26 12:25:33.193526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.176 [2024-07-26 12:25:33.193557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.176 [2024-07-26 12:25:33.193575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.176 [2024-07-26 12:25:33.193813] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.176 [2024-07-26 12:25:33.194056] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.176 [2024-07-26 12:25:33.194090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.176 [2024-07-26 12:25:33.194105] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.176 [2024-07-26 12:25:33.197683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.176 [2024-07-26 12:25:33.206973] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.176 [2024-07-26 12:25:33.207411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.176 [2024-07-26 12:25:33.207442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.176 [2024-07-26 12:25:33.207459] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.176 [2024-07-26 12:25:33.207698] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.176 [2024-07-26 12:25:33.207940] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.176 [2024-07-26 12:25:33.207963] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.176 [2024-07-26 12:25:33.207978] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.176 [2024-07-26 12:25:33.211564] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.176 [2024-07-26 12:25:33.220852] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.176 [2024-07-26 12:25:33.221273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.176 [2024-07-26 12:25:33.221303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.176 [2024-07-26 12:25:33.221320] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.176 [2024-07-26 12:25:33.221559] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.176 [2024-07-26 12:25:33.221801] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.176 [2024-07-26 12:25:33.221824] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.176 [2024-07-26 12:25:33.221839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.176 [2024-07-26 12:25:33.225433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.176 [2024-07-26 12:25:33.234724] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.176 [2024-07-26 12:25:33.235137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.176 [2024-07-26 12:25:33.235168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.176 [2024-07-26 12:25:33.235186] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.176 [2024-07-26 12:25:33.235424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.176 [2024-07-26 12:25:33.235666] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.176 [2024-07-26 12:25:33.235688] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.176 [2024-07-26 12:25:33.235703] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.176 [2024-07-26 12:25:33.239285] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.176 [2024-07-26 12:25:33.248574] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.176 [2024-07-26 12:25:33.248976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.176 [2024-07-26 12:25:33.249006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.176 [2024-07-26 12:25:33.249024] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.176 [2024-07-26 12:25:33.249272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.176 [2024-07-26 12:25:33.249514] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.176 [2024-07-26 12:25:33.249537] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.176 [2024-07-26 12:25:33.249552] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.176 [2024-07-26 12:25:33.253141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.176 [2024-07-26 12:25:33.262573] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.176 [2024-07-26 12:25:33.263025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.176 [2024-07-26 12:25:33.263057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.176 [2024-07-26 12:25:33.263086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.176 [2024-07-26 12:25:33.263326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.176 [2024-07-26 12:25:33.263568] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.176 [2024-07-26 12:25:33.263591] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.176 [2024-07-26 12:25:33.263606] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.176 [2024-07-26 12:25:33.267190] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.176 [2024-07-26 12:25:33.276749] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.176 [2024-07-26 12:25:33.277158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.176 [2024-07-26 12:25:33.277191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.176 [2024-07-26 12:25:33.277215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.176 [2024-07-26 12:25:33.277455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.176 [2024-07-26 12:25:33.277699] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.176 [2024-07-26 12:25:33.277722] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.176 [2024-07-26 12:25:33.277738] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.176 [2024-07-26 12:25:33.281321] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.176 [2024-07-26 12:25:33.290612] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.176 [2024-07-26 12:25:33.291041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.176 [2024-07-26 12:25:33.291080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.176 [2024-07-26 12:25:33.291111] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.176 [2024-07-26 12:25:33.291359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.176 [2024-07-26 12:25:33.291602] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.176 [2024-07-26 12:25:33.291625] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.176 [2024-07-26 12:25:33.291640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.176 [2024-07-26 12:25:33.295242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.176 [2024-07-26 12:25:33.304533] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.176 [2024-07-26 12:25:33.304980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.176 [2024-07-26 12:25:33.305012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.176 [2024-07-26 12:25:33.305030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.176 [2024-07-26 12:25:33.305282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.176 [2024-07-26 12:25:33.305525] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.176 [2024-07-26 12:25:33.305548] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.176 [2024-07-26 12:25:33.305563] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.176 [2024-07-26 12:25:33.309147] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.176 [2024-07-26 12:25:33.318430] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.176 [2024-07-26 12:25:33.318879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.176 [2024-07-26 12:25:33.318910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.176 [2024-07-26 12:25:33.318928] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.176 [2024-07-26 12:25:33.319179] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.176 [2024-07-26 12:25:33.319428] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.176 [2024-07-26 12:25:33.319452] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.176 [2024-07-26 12:25:33.319467] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.176 [2024-07-26 12:25:33.323043] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.176 [2024-07-26 12:25:33.332357] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.177 [2024-07-26 12:25:33.332779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.177 [2024-07-26 12:25:33.332810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.177 [2024-07-26 12:25:33.332828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.177 [2024-07-26 12:25:33.333077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.177 [2024-07-26 12:25:33.333320] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.177 [2024-07-26 12:25:33.333344] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.177 [2024-07-26 12:25:33.333359] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.177 [2024-07-26 12:25:33.336937] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.177 [2024-07-26 12:25:33.346239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.177 [2024-07-26 12:25:33.346678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.177 [2024-07-26 12:25:33.346709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.177 [2024-07-26 12:25:33.346727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.177 [2024-07-26 12:25:33.346965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.177 [2024-07-26 12:25:33.347219] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.177 [2024-07-26 12:25:33.347243] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.177 [2024-07-26 12:25:33.347258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.177 [2024-07-26 12:25:33.350835] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.177 [2024-07-26 12:25:33.360132] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.177 [2024-07-26 12:25:33.360548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.177 [2024-07-26 12:25:33.360579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.177 [2024-07-26 12:25:33.360597] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.177 [2024-07-26 12:25:33.360835] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.177 [2024-07-26 12:25:33.361088] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.177 [2024-07-26 12:25:33.361112] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.177 [2024-07-26 12:25:33.361128] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.177 [2024-07-26 12:25:33.364706] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.177 [2024-07-26 12:25:33.373992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.177 [2024-07-26 12:25:33.374434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.177 [2024-07-26 12:25:33.374464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.177 [2024-07-26 12:25:33.374482] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.177 [2024-07-26 12:25:33.374720] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.177 [2024-07-26 12:25:33.374963] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.177 [2024-07-26 12:25:33.374985] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.177 [2024-07-26 12:25:33.375000] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.177 [2024-07-26 12:25:33.378586] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.177 [2024-07-26 12:25:33.387872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.177 [2024-07-26 12:25:33.388317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.177 [2024-07-26 12:25:33.388348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.177 [2024-07-26 12:25:33.388365] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.177 [2024-07-26 12:25:33.388603] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.177 [2024-07-26 12:25:33.388846] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.177 [2024-07-26 12:25:33.388868] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.177 [2024-07-26 12:25:33.388884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.177 [2024-07-26 12:25:33.392481] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.177 [2024-07-26 12:25:33.401764] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.177 [2024-07-26 12:25:33.402203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.177 [2024-07-26 12:25:33.402235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.177 [2024-07-26 12:25:33.402253] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.177 [2024-07-26 12:25:33.402492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.177 [2024-07-26 12:25:33.402734] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.177 [2024-07-26 12:25:33.402757] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.177 [2024-07-26 12:25:33.402773] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.177 [2024-07-26 12:25:33.406365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.177 [2024-07-26 12:25:33.415650] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.177 [2024-07-26 12:25:33.416086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.177 [2024-07-26 12:25:33.416117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.177 [2024-07-26 12:25:33.416141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.177 [2024-07-26 12:25:33.416381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.177 [2024-07-26 12:25:33.416624] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.177 [2024-07-26 12:25:33.416646] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.177 [2024-07-26 12:25:33.416661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.177 [2024-07-26 12:25:33.420251] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.429552] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.429966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.429997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.430015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.430265] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.430509] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.446 [2024-07-26 12:25:33.430531] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.446 [2024-07-26 12:25:33.430547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.446 [2024-07-26 12:25:33.434135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.443426] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.443839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.443869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.443887] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.444138] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.444381] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.446 [2024-07-26 12:25:33.444404] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.446 [2024-07-26 12:25:33.444419] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.446 [2024-07-26 12:25:33.447995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.457284] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.457697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.457727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.457745] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.457984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.458238] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.446 [2024-07-26 12:25:33.458267] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.446 [2024-07-26 12:25:33.458283] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.446 [2024-07-26 12:25:33.461858] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.471147] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.471582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.471612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.471630] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.471868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.472122] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.446 [2024-07-26 12:25:33.472146] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.446 [2024-07-26 12:25:33.472160] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.446 [2024-07-26 12:25:33.475738] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.485064] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.485511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.485542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.485559] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.485798] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.486040] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.446 [2024-07-26 12:25:33.486074] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.446 [2024-07-26 12:25:33.486092] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.446 [2024-07-26 12:25:33.489686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.499004] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.499455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.499487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.499504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.499742] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.499985] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.446 [2024-07-26 12:25:33.500008] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.446 [2024-07-26 12:25:33.500023] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.446 [2024-07-26 12:25:33.503609] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.512913] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.513313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.513344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.513362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.513600] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.513842] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.446 [2024-07-26 12:25:33.513865] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.446 [2024-07-26 12:25:33.513880] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.446 [2024-07-26 12:25:33.517464] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.527037] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.527489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.527523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.527542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.527781] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.528024] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.446 [2024-07-26 12:25:33.528047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.446 [2024-07-26 12:25:33.528073] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.446 [2024-07-26 12:25:33.531653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.540963] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.541392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.541426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.541445] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.541684] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.541927] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.446 [2024-07-26 12:25:33.541949] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.446 [2024-07-26 12:25:33.541964] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.446 [2024-07-26 12:25:33.545554] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.554848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.555305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.555336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.555354] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.555598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.555842] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.446 [2024-07-26 12:25:33.555864] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.446 [2024-07-26 12:25:33.555879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.446 [2024-07-26 12:25:33.559461] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.568742] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.569151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.569182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.569200] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.569438] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.569680] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.446 [2024-07-26 12:25:33.569703] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.446 [2024-07-26 12:25:33.569718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.446 [2024-07-26 12:25:33.573304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.582653] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.583070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.583102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.583120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.583359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.583602] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.446 [2024-07-26 12:25:33.583624] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.446 [2024-07-26 12:25:33.583639] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.446 [2024-07-26 12:25:33.587235] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.596532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.597089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.597122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.597140] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.597379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.597621] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.446 [2024-07-26 12:25:33.597643] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.446 [2024-07-26 12:25:33.597664] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.446 [2024-07-26 12:25:33.601248] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.446 [2024-07-26 12:25:33.610538] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.446 [2024-07-26 12:25:33.610947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.446 [2024-07-26 12:25:33.610979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.446 [2024-07-26 12:25:33.610997] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.446 [2024-07-26 12:25:33.611246] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.446 [2024-07-26 12:25:33.611489] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.447 [2024-07-26 12:25:33.611512] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.447 [2024-07-26 12:25:33.611527] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.447 [2024-07-26 12:25:33.615112] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.447 [2024-07-26 12:25:33.624405] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.447 [2024-07-26 12:25:33.624839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.447 [2024-07-26 12:25:33.624869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.447 [2024-07-26 12:25:33.624888] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.447 [2024-07-26 12:25:33.625135] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.447 [2024-07-26 12:25:33.625379] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.447 [2024-07-26 12:25:33.625402] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.447 [2024-07-26 12:25:33.625417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.447 [2024-07-26 12:25:33.628991] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.447 [2024-07-26 12:25:33.638276] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.447 [2024-07-26 12:25:33.638759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.447 [2024-07-26 12:25:33.638789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.447 [2024-07-26 12:25:33.638807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.447 [2024-07-26 12:25:33.639046] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.447 [2024-07-26 12:25:33.639298] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.447 [2024-07-26 12:25:33.639321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.447 [2024-07-26 12:25:33.639336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.447 [2024-07-26 12:25:33.642906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.447 [2024-07-26 12:25:33.652212] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.447 [2024-07-26 12:25:33.652620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.447 [2024-07-26 12:25:33.652656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.447 [2024-07-26 12:25:33.652675] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.447 [2024-07-26 12:25:33.652913] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.447 [2024-07-26 12:25:33.653164] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.447 [2024-07-26 12:25:33.653191] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.447 [2024-07-26 12:25:33.653207] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.447 [2024-07-26 12:25:33.656776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.447 [2024-07-26 12:25:33.666090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.447 [2024-07-26 12:25:33.666530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.447 [2024-07-26 12:25:33.666559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.447 [2024-07-26 12:25:33.666577] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.447 [2024-07-26 12:25:33.666815] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.447 [2024-07-26 12:25:33.667056] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.447 [2024-07-26 12:25:33.667090] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.447 [2024-07-26 12:25:33.667106] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.447 [2024-07-26 12:25:33.670685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.447 [2024-07-26 12:25:33.680005] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.447 [2024-07-26 12:25:33.680590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.447 [2024-07-26 12:25:33.680640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.447 [2024-07-26 12:25:33.680659] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.447 [2024-07-26 12:25:33.680897] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.447 [2024-07-26 12:25:33.681149] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.447 [2024-07-26 12:25:33.681179] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.447 [2024-07-26 12:25:33.681194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.447 [2024-07-26 12:25:33.684775] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.447 [2024-07-26 12:25:33.693915] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.447 [2024-07-26 12:25:33.694341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.447 [2024-07-26 12:25:33.694372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.447 [2024-07-26 12:25:33.694391] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.447 [2024-07-26 12:25:33.694629] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.447 [2024-07-26 12:25:33.694878] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.447 [2024-07-26 12:25:33.694901] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.447 [2024-07-26 12:25:33.694916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.712 [2024-07-26 12:25:33.698512] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.712 [2024-07-26 12:25:33.707816] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.712 [2024-07-26 12:25:33.708277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.712 [2024-07-26 12:25:33.708308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.712 [2024-07-26 12:25:33.708326] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.712 [2024-07-26 12:25:33.708564] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.712 [2024-07-26 12:25:33.708807] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.712 [2024-07-26 12:25:33.708829] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.712 [2024-07-26 12:25:33.708845] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.712 [2024-07-26 12:25:33.712433] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.712 [2024-07-26 12:25:33.721722] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.712 [2024-07-26 12:25:33.722148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.712 [2024-07-26 12:25:33.722179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.712 [2024-07-26 12:25:33.722197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.712 [2024-07-26 12:25:33.722436] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.712 [2024-07-26 12:25:33.722678] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.712 [2024-07-26 12:25:33.722701] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.712 [2024-07-26 12:25:33.722716] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.712 [2024-07-26 12:25:33.726300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.712 [2024-07-26 12:25:33.735598] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.712 [2024-07-26 12:25:33.736007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.712 [2024-07-26 12:25:33.736037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.712 [2024-07-26 12:25:33.736055] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.712 [2024-07-26 12:25:33.736305] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.712 [2024-07-26 12:25:33.736547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.712 [2024-07-26 12:25:33.736570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.712 [2024-07-26 12:25:33.736585] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.712 [2024-07-26 12:25:33.740181] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.712 [2024-07-26 12:25:33.749500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.712 [2024-07-26 12:25:33.750007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.712 [2024-07-26 12:25:33.750039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.712 [2024-07-26 12:25:33.750057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.712 [2024-07-26 12:25:33.750306] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.712 [2024-07-26 12:25:33.750547] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.712 [2024-07-26 12:25:33.750570] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.712 [2024-07-26 12:25:33.750584] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.712 [2024-07-26 12:25:33.754187] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.712 [2024-07-26 12:25:33.763504] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.712 [2024-07-26 12:25:33.763922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.712 [2024-07-26 12:25:33.763952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.712 [2024-07-26 12:25:33.763970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.712 [2024-07-26 12:25:33.764218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.712 [2024-07-26 12:25:33.764461] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.712 [2024-07-26 12:25:33.764485] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.712 [2024-07-26 12:25:33.764500] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.712 [2024-07-26 12:25:33.768089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.712 [2024-07-26 12:25:33.777644] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.712 [2024-07-26 12:25:33.778104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.712 [2024-07-26 12:25:33.778140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.712 [2024-07-26 12:25:33.778159] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.712 [2024-07-26 12:25:33.778404] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.712 [2024-07-26 12:25:33.778647] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.712 [2024-07-26 12:25:33.778669] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.712 [2024-07-26 12:25:33.778685] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.712 [2024-07-26 12:25:33.782270] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.712 [2024-07-26 12:25:33.791562] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.712 [2024-07-26 12:25:33.792005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.712 [2024-07-26 12:25:33.792036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.712 [2024-07-26 12:25:33.792073] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.712 [2024-07-26 12:25:33.792316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.712 [2024-07-26 12:25:33.792558] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.712 [2024-07-26 12:25:33.792581] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.712 [2024-07-26 12:25:33.792596] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.712 [2024-07-26 12:25:33.796459] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.712 [2024-07-26 12:25:33.805537] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.712 [2024-07-26 12:25:33.805973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.712 [2024-07-26 12:25:33.806004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.712 [2024-07-26 12:25:33.806022] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.712 [2024-07-26 12:25:33.806270] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.712 [2024-07-26 12:25:33.806512] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.712 [2024-07-26 12:25:33.806535] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.712 [2024-07-26 12:25:33.806550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.712 [2024-07-26 12:25:33.810134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.712 [2024-07-26 12:25:33.819422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.712 [2024-07-26 12:25:33.819848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.712 [2024-07-26 12:25:33.819880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.712 [2024-07-26 12:25:33.819898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.712 [2024-07-26 12:25:33.820148] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.712 [2024-07-26 12:25:33.820391] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.712 [2024-07-26 12:25:33.820415] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.712 [2024-07-26 12:25:33.820430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.712 [2024-07-26 12:25:33.824005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.712 [2024-07-26 12:25:33.833295] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.712 [2024-07-26 12:25:33.833870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.712 [2024-07-26 12:25:33.833939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.712 [2024-07-26 12:25:33.833957] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.712 [2024-07-26 12:25:33.834206] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.712 [2024-07-26 12:25:33.834449] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.712 [2024-07-26 12:25:33.834477] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.712 [2024-07-26 12:25:33.834493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.712 [2024-07-26 12:25:33.838104] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.712 [2024-07-26 12:25:33.847189] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.713 [2024-07-26 12:25:33.847607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.713 [2024-07-26 12:25:33.847638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.713 [2024-07-26 12:25:33.847656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.713 [2024-07-26 12:25:33.847895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.713 [2024-07-26 12:25:33.848148] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.713 [2024-07-26 12:25:33.848172] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.713 [2024-07-26 12:25:33.848187] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.713 [2024-07-26 12:25:33.851765] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.713 [2024-07-26 12:25:33.861049] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.713 [2024-07-26 12:25:33.861499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.713 [2024-07-26 12:25:33.861530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.713 [2024-07-26 12:25:33.861547] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.713 [2024-07-26 12:25:33.861786] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.713 [2024-07-26 12:25:33.862028] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.713 [2024-07-26 12:25:33.862050] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.713 [2024-07-26 12:25:33.862077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.713 [2024-07-26 12:25:33.865655] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.713 [2024-07-26 12:25:33.874936] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.713 [2024-07-26 12:25:33.875374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.713 [2024-07-26 12:25:33.875405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.713 [2024-07-26 12:25:33.875423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.713 [2024-07-26 12:25:33.875662] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.713 [2024-07-26 12:25:33.875904] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.713 [2024-07-26 12:25:33.875927] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.713 [2024-07-26 12:25:33.875942] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.713 [2024-07-26 12:25:33.879530] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.713 [2024-07-26 12:25:33.888829] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.713 [2024-07-26 12:25:33.889271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.713 [2024-07-26 12:25:33.889302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.713 [2024-07-26 12:25:33.889319] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.713 [2024-07-26 12:25:33.889558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.713 [2024-07-26 12:25:33.889800] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.713 [2024-07-26 12:25:33.889823] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.713 [2024-07-26 12:25:33.889838] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.713 [2024-07-26 12:25:33.893425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.713 [2024-07-26 12:25:33.902744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.713 [2024-07-26 12:25:33.903160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.713 [2024-07-26 12:25:33.903191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.713 [2024-07-26 12:25:33.903209] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.713 [2024-07-26 12:25:33.903447] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.713 [2024-07-26 12:25:33.903689] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.713 [2024-07-26 12:25:33.903712] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.713 [2024-07-26 12:25:33.903727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.713 [2024-07-26 12:25:33.907322] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.713 [2024-07-26 12:25:33.916604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.713 [2024-07-26 12:25:33.917047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.713 [2024-07-26 12:25:33.917085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.713 [2024-07-26 12:25:33.917103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.713 [2024-07-26 12:25:33.917343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.713 [2024-07-26 12:25:33.917585] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.713 [2024-07-26 12:25:33.917607] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.713 [2024-07-26 12:25:33.917622] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.713 [2024-07-26 12:25:33.921209] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.713 [2024-07-26 12:25:33.930500] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.713 [2024-07-26 12:25:33.930933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.713 [2024-07-26 12:25:33.930963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.713 [2024-07-26 12:25:33.930987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.713 [2024-07-26 12:25:33.931238] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.713 [2024-07-26 12:25:33.931481] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.713 [2024-07-26 12:25:33.931504] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.713 [2024-07-26 12:25:33.931519] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.713 [2024-07-26 12:25:33.935102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.713 [2024-07-26 12:25:33.944393] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.713 [2024-07-26 12:25:33.944826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.713 [2024-07-26 12:25:33.944856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.713 [2024-07-26 12:25:33.944874] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.713 [2024-07-26 12:25:33.945124] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.713 [2024-07-26 12:25:33.945368] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.713 [2024-07-26 12:25:33.945391] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.713 [2024-07-26 12:25:33.945406] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.713 [2024-07-26 12:25:33.948982] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.713 [2024-07-26 12:25:33.958302] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.713 [2024-07-26 12:25:33.958749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.713 [2024-07-26 12:25:33.958780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.713 [2024-07-26 12:25:33.958797] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.713 [2024-07-26 12:25:33.959036] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.713 [2024-07-26 12:25:33.959287] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.713 [2024-07-26 12:25:33.959311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.713 [2024-07-26 12:25:33.959327] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.975 [2024-07-26 12:25:33.962910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.975 [2024-07-26 12:25:33.972225] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.975 [2024-07-26 12:25:33.972660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.975 [2024-07-26 12:25:33.972690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.975 [2024-07-26 12:25:33.972707] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.975 [2024-07-26 12:25:33.972946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.975 [2024-07-26 12:25:33.973201] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.975 [2024-07-26 12:25:33.973230] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.975 [2024-07-26 12:25:33.973246] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.975 [2024-07-26 12:25:33.976830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.975 [2024-07-26 12:25:33.986145] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.975 [2024-07-26 12:25:33.986704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.975 [2024-07-26 12:25:33.986765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.975 [2024-07-26 12:25:33.986783] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.975 [2024-07-26 12:25:33.987022] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.975 [2024-07-26 12:25:33.987288] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.975 [2024-07-26 12:25:33.987311] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.975 [2024-07-26 12:25:33.987326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.975 [2024-07-26 12:25:33.990908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.975 [2024-07-26 12:25:34.000013] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.975 [2024-07-26 12:25:34.000429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.975 [2024-07-26 12:25:34.000460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.975 [2024-07-26 12:25:34.000478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.975 [2024-07-26 12:25:34.000716] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.975 [2024-07-26 12:25:34.000958] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.975 [2024-07-26 12:25:34.000981] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.975 [2024-07-26 12:25:34.000996] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.975 [2024-07-26 12:25:34.004581] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.975 [2024-07-26 12:25:34.013865] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.975 [2024-07-26 12:25:34.014259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.975 [2024-07-26 12:25:34.014290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.975 [2024-07-26 12:25:34.014308] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.975 [2024-07-26 12:25:34.014546] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.975 [2024-07-26 12:25:34.014788] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.975 [2024-07-26 12:25:34.014810] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.975 [2024-07-26 12:25:34.014825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.975 [2024-07-26 12:25:34.018408] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.975 [2024-07-26 12:25:34.027968] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.975 [2024-07-26 12:25:34.028431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.975 [2024-07-26 12:25:34.028464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.975 [2024-07-26 12:25:34.028483] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.975 [2024-07-26 12:25:34.028722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.975 [2024-07-26 12:25:34.028965] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.975 [2024-07-26 12:25:34.028988] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.975 [2024-07-26 12:25:34.029004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.975 [2024-07-26 12:25:34.032592] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.975 [2024-07-26 12:25:34.041884] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.975 [2024-07-26 12:25:34.042327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.975 [2024-07-26 12:25:34.042358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.975 [2024-07-26 12:25:34.042376] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.975 [2024-07-26 12:25:34.042615] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.975 [2024-07-26 12:25:34.042857] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.975 [2024-07-26 12:25:34.042880] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.975 [2024-07-26 12:25:34.042896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.975 [2024-07-26 12:25:34.046490] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.975 [2024-07-26 12:25:34.055780] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:40.975 [2024-07-26 12:25:34.056220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:40.975 [2024-07-26 12:25:34.056251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:40.975 [2024-07-26 12:25:34.056269] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:40.975 [2024-07-26 12:25:34.056507] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:40.975 [2024-07-26 12:25:34.056750] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:40.975 [2024-07-26 12:25:34.056773] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:40.975 [2024-07-26 12:25:34.056788] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:40.975 [2024-07-26 12:25:34.060375] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:40.975 [2024-07-26 12:25:34.069672] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.975 [2024-07-26 12:25:34.070154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.975 [2024-07-26 12:25:34.070186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.975 [2024-07-26 12:25:34.070204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.975 [2024-07-26 12:25:34.070448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.975 [2024-07-26 12:25:34.070691] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.975 [2024-07-26 12:25:34.070714] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.975 [2024-07-26 12:25:34.070730] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.975 [2024-07-26 12:25:34.074314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.975 [2024-07-26 12:25:34.083608] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.975 [2024-07-26 12:25:34.084038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.975 [2024-07-26 12:25:34.084077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.975 [2024-07-26 12:25:34.084096] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.975 [2024-07-26 12:25:34.084335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.975 [2024-07-26 12:25:34.084577] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.975 [2024-07-26 12:25:34.084599] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.975 [2024-07-26 12:25:34.084615] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.975 [2024-07-26 12:25:34.088201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.975 [2024-07-26 12:25:34.097526] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.976 [2024-07-26 12:25:34.097938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.976 [2024-07-26 12:25:34.097970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.976 [2024-07-26 12:25:34.097988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.976 [2024-07-26 12:25:34.098235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.976 [2024-07-26 12:25:34.098479] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.976 [2024-07-26 12:25:34.098502] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.976 [2024-07-26 12:25:34.098516] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.976 [2024-07-26 12:25:34.102097] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.976 [2024-07-26 12:25:34.111373] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.976 [2024-07-26 12:25:34.111889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.976 [2024-07-26 12:25:34.111919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.976 [2024-07-26 12:25:34.111937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.976 [2024-07-26 12:25:34.112188] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.976 [2024-07-26 12:25:34.112431] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.976 [2024-07-26 12:25:34.112453] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.976 [2024-07-26 12:25:34.112474] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.976 [2024-07-26 12:25:34.116049] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.976 [2024-07-26 12:25:34.125334] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.976 [2024-07-26 12:25:34.125802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.976 [2024-07-26 12:25:34.125833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.976 [2024-07-26 12:25:34.125851] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.976 [2024-07-26 12:25:34.126099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.976 [2024-07-26 12:25:34.126342] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.976 [2024-07-26 12:25:34.126365] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.976 [2024-07-26 12:25:34.126380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.976 [2024-07-26 12:25:34.129948] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.976 [2024-07-26 12:25:34.139231] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.976 [2024-07-26 12:25:34.139638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.976 [2024-07-26 12:25:34.139668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.976 [2024-07-26 12:25:34.139686] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.976 [2024-07-26 12:25:34.139924] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.976 [2024-07-26 12:25:34.140187] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.976 [2024-07-26 12:25:34.140211] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.976 [2024-07-26 12:25:34.140227] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.976 [2024-07-26 12:25:34.143802] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.976 [2024-07-26 12:25:34.153090] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.976 [2024-07-26 12:25:34.153500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.976 [2024-07-26 12:25:34.153532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.976 [2024-07-26 12:25:34.153549] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.976 [2024-07-26 12:25:34.153789] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.976 [2024-07-26 12:25:34.154031] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.976 [2024-07-26 12:25:34.154054] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.976 [2024-07-26 12:25:34.154086] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.976 [2024-07-26 12:25:34.157665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.976 [2024-07-26 12:25:34.166962] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.976 [2024-07-26 12:25:34.167418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.976 [2024-07-26 12:25:34.167455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.976 [2024-07-26 12:25:34.167474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.976 [2024-07-26 12:25:34.167712] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.976 [2024-07-26 12:25:34.167954] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.976 [2024-07-26 12:25:34.167977] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.976 [2024-07-26 12:25:34.167992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.976 [2024-07-26 12:25:34.171574] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.976 [2024-07-26 12:25:34.180848] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.976 [2024-07-26 12:25:34.181288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.976 [2024-07-26 12:25:34.181319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.976 [2024-07-26 12:25:34.181337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.976 [2024-07-26 12:25:34.181575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.976 [2024-07-26 12:25:34.181817] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.976 [2024-07-26 12:25:34.181840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.976 [2024-07-26 12:25:34.181855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.976 [2024-07-26 12:25:34.185438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.976 [2024-07-26 12:25:34.194727] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.976 [2024-07-26 12:25:34.195172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.976 [2024-07-26 12:25:34.195204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.976 [2024-07-26 12:25:34.195221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.976 [2024-07-26 12:25:34.195459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.976 [2024-07-26 12:25:34.195712] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.976 [2024-07-26 12:25:34.195736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.976 [2024-07-26 12:25:34.195751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.976 [2024-07-26 12:25:34.199337] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.976 [2024-07-26 12:25:34.208621] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.976 [2024-07-26 12:25:34.209052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.976 [2024-07-26 12:25:34.209090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.976 [2024-07-26 12:25:34.209108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.976 [2024-07-26 12:25:34.209346] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.976 [2024-07-26 12:25:34.209594] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.976 [2024-07-26 12:25:34.209617] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.976 [2024-07-26 12:25:34.209632] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.976 [2024-07-26 12:25:34.213215] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:40.976 [2024-07-26 12:25:34.222532] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:40.976 [2024-07-26 12:25:34.222955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:40.976 [2024-07-26 12:25:34.222987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:40.976 [2024-07-26 12:25:34.223005] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:40.977 [2024-07-26 12:25:34.223254] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:40.977 [2024-07-26 12:25:34.223497] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:40.977 [2024-07-26 12:25:34.223520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:40.977 [2024-07-26 12:25:34.223535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:40.977 [2024-07-26 12:25:34.227117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.234 [2024-07-26 12:25:34.236413] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.234 [2024-07-26 12:25:34.236865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.234 [2024-07-26 12:25:34.236897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.234 [2024-07-26 12:25:34.236915] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.234 [2024-07-26 12:25:34.237166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.234 [2024-07-26 12:25:34.237409] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.234 [2024-07-26 12:25:34.237432] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.234 [2024-07-26 12:25:34.237447] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.234 [2024-07-26 12:25:34.241022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.234 [2024-07-26 12:25:34.250311] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.234 [2024-07-26 12:25:34.250754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.234 [2024-07-26 12:25:34.250784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.234 [2024-07-26 12:25:34.250802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.234 [2024-07-26 12:25:34.251040] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.234 [2024-07-26 12:25:34.251293] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.234 [2024-07-26 12:25:34.251316] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.234 [2024-07-26 12:25:34.251331] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.234 [2024-07-26 12:25:34.254910] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.234 [2024-07-26 12:25:34.264191] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.234 [2024-07-26 12:25:34.264632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.234 [2024-07-26 12:25:34.264662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.234 [2024-07-26 12:25:34.264679] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.235 [2024-07-26 12:25:34.264917] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.235 [2024-07-26 12:25:34.265169] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.235 [2024-07-26 12:25:34.265193] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.235 [2024-07-26 12:25:34.265209] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.235 [2024-07-26 12:25:34.268782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.235 [2024-07-26 12:25:34.278348] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.235 [2024-07-26 12:25:34.278800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.235 [2024-07-26 12:25:34.278833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.235 [2024-07-26 12:25:34.278852] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.235 [2024-07-26 12:25:34.279101] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.235 [2024-07-26 12:25:34.279345] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.235 [2024-07-26 12:25:34.279368] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.235 [2024-07-26 12:25:34.279384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.235 [2024-07-26 12:25:34.282957] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.235 [2024-07-26 12:25:34.292263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.235 [2024-07-26 12:25:34.292677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.235 [2024-07-26 12:25:34.292709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.235 [2024-07-26 12:25:34.292727] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.235 [2024-07-26 12:25:34.292965] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.235 [2024-07-26 12:25:34.293219] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.235 [2024-07-26 12:25:34.293242] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.235 [2024-07-26 12:25:34.293257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.235 [2024-07-26 12:25:34.296847] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.235 [2024-07-26 12:25:34.306125] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.235 [2024-07-26 12:25:34.306561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.235 [2024-07-26 12:25:34.306592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.235 [2024-07-26 12:25:34.306616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.235 [2024-07-26 12:25:34.306856] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.235 [2024-07-26 12:25:34.307109] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.235 [2024-07-26 12:25:34.307133] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.235 [2024-07-26 12:25:34.307148] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.235 [2024-07-26 12:25:34.310722] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.235 [2024-07-26 12:25:34.319996] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.235 [2024-07-26 12:25:34.320447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.235 [2024-07-26 12:25:34.320478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.235 [2024-07-26 12:25:34.320496] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.235 [2024-07-26 12:25:34.320734] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.235 [2024-07-26 12:25:34.320977] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.235 [2024-07-26 12:25:34.321000] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.235 [2024-07-26 12:25:34.321015] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.235 [2024-07-26 12:25:34.324598] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.235 [2024-07-26 12:25:34.333872] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.235 [2024-07-26 12:25:34.334278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.235 [2024-07-26 12:25:34.334309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.235 [2024-07-26 12:25:34.334327] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.235 [2024-07-26 12:25:34.334565] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.235 [2024-07-26 12:25:34.334807] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.235 [2024-07-26 12:25:34.334830] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.235 [2024-07-26 12:25:34.334846] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.235 [2024-07-26 12:25:34.338430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.235 [2024-07-26 12:25:34.347730] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.235 [2024-07-26 12:25:34.348144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.235 [2024-07-26 12:25:34.348176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.235 [2024-07-26 12:25:34.348195] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.235 [2024-07-26 12:25:34.348434] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.235 [2024-07-26 12:25:34.348676] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.235 [2024-07-26 12:25:34.348704] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.235 [2024-07-26 12:25:34.348720] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.235 [2024-07-26 12:25:34.352305] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.235 [2024-07-26 12:25:34.361582] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.235 [2024-07-26 12:25:34.362014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.235 [2024-07-26 12:25:34.362045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.235 [2024-07-26 12:25:34.362072] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.235 [2024-07-26 12:25:34.362314] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.235 [2024-07-26 12:25:34.362556] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.235 [2024-07-26 12:25:34.362579] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.235 [2024-07-26 12:25:34.362594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.235 [2024-07-26 12:25:34.366175] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.235 [2024-07-26 12:25:34.375472] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.235 [2024-07-26 12:25:34.375909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.235 [2024-07-26 12:25:34.375940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.235 [2024-07-26 12:25:34.375958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.235 [2024-07-26 12:25:34.376209] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.235 [2024-07-26 12:25:34.376452] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.235 [2024-07-26 12:25:34.376475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.235 [2024-07-26 12:25:34.376490] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.235 [2024-07-26 12:25:34.380073] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.235 [2024-07-26 12:25:34.389354] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.236 [2024-07-26 12:25:34.389783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.236 [2024-07-26 12:25:34.389813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.236 [2024-07-26 12:25:34.389831] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.236 [2024-07-26 12:25:34.390079] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.236 [2024-07-26 12:25:34.390322] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.236 [2024-07-26 12:25:34.390345] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.236 [2024-07-26 12:25:34.390360] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.236 [2024-07-26 12:25:34.393935] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.236 [2024-07-26 12:25:34.403235] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.236 [2024-07-26 12:25:34.403671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.236 [2024-07-26 12:25:34.403701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.236 [2024-07-26 12:25:34.403719] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.236 [2024-07-26 12:25:34.403957] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.236 [2024-07-26 12:25:34.404211] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.236 [2024-07-26 12:25:34.404235] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.236 [2024-07-26 12:25:34.404250] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.236 [2024-07-26 12:25:34.407828] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.236 [2024-07-26 12:25:34.417114] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.236 [2024-07-26 12:25:34.417518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.236 [2024-07-26 12:25:34.417548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.236 [2024-07-26 12:25:34.417566] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.236 [2024-07-26 12:25:34.417804] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.236 [2024-07-26 12:25:34.418046] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.236 [2024-07-26 12:25:34.418078] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.236 [2024-07-26 12:25:34.418094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.236 [2024-07-26 12:25:34.421665] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.236 [2024-07-26 12:25:34.430945] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.236 [2024-07-26 12:25:34.431356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.236 [2024-07-26 12:25:34.431386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.236 [2024-07-26 12:25:34.431404] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.236 [2024-07-26 12:25:34.431643] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.236 [2024-07-26 12:25:34.431885] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.236 [2024-07-26 12:25:34.431907] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.236 [2024-07-26 12:25:34.431922] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.236 [2024-07-26 12:25:34.435505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.236 [2024-07-26 12:25:34.444805] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.236 [2024-07-26 12:25:34.445224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.236 [2024-07-26 12:25:34.445254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.236 [2024-07-26 12:25:34.445272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.236 [2024-07-26 12:25:34.445516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.236 [2024-07-26 12:25:34.445758] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.236 [2024-07-26 12:25:34.445781] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.236 [2024-07-26 12:25:34.445796] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.236 [2024-07-26 12:25:34.449380] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.236 [2024-07-26 12:25:34.458663] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.236 [2024-07-26 12:25:34.459093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.236 [2024-07-26 12:25:34.459123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.236 [2024-07-26 12:25:34.459141] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.236 [2024-07-26 12:25:34.459379] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.236 [2024-07-26 12:25:34.459621] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.236 [2024-07-26 12:25:34.459644] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.236 [2024-07-26 12:25:34.459659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.236 [2024-07-26 12:25:34.463242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.236 [2024-07-26 12:25:34.472520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.236 [2024-07-26 12:25:34.472959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.236 [2024-07-26 12:25:34.472990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.236 [2024-07-26 12:25:34.473007] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.236 [2024-07-26 12:25:34.473255] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.236 [2024-07-26 12:25:34.473498] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.236 [2024-07-26 12:25:34.473520] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.236 [2024-07-26 12:25:34.473535] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.236 [2024-07-26 12:25:34.477116] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.236 [2024-07-26 12:25:34.486404] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.236 [2024-07-26 12:25:34.486837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.236 [2024-07-26 12:25:34.486867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.236 [2024-07-26 12:25:34.486885] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.236 [2024-07-26 12:25:34.487139] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.236 [2024-07-26 12:25:34.487382] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.236 [2024-07-26 12:25:34.487405] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.236 [2024-07-26 12:25:34.487429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.495 [2024-07-26 12:25:34.491006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.495 [2024-07-26 12:25:34.500309] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.495 [2024-07-26 12:25:34.500719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.495 [2024-07-26 12:25:34.500750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.495 [2024-07-26 12:25:34.500768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.495 [2024-07-26 12:25:34.501006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.495 [2024-07-26 12:25:34.501259] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.495 [2024-07-26 12:25:34.501283] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.495 [2024-07-26 12:25:34.501299] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.495 [2024-07-26 12:25:34.504872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.495 [2024-07-26 12:25:34.514157] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.495 [2024-07-26 12:25:34.514587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.495 [2024-07-26 12:25:34.514617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.495 [2024-07-26 12:25:34.514635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.495 [2024-07-26 12:25:34.514873] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.495 [2024-07-26 12:25:34.515125] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.495 [2024-07-26 12:25:34.515149] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.495 [2024-07-26 12:25:34.515165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.495 [2024-07-26 12:25:34.518740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.495 [2024-07-26 12:25:34.528018] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.495 [2024-07-26 12:25:34.528472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.495 [2024-07-26 12:25:34.528515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.495 [2024-07-26 12:25:34.528544] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.495 [2024-07-26 12:25:34.528850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.495 [2024-07-26 12:25:34.529178] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.495 [2024-07-26 12:25:34.529212] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.495 [2024-07-26 12:25:34.529241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.495 [2024-07-26 12:25:34.533374] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.495 [2024-07-26 12:25:34.542036] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.495 [2024-07-26 12:25:34.542486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.495 [2024-07-26 12:25:34.542519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.495 [2024-07-26 12:25:34.542538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.495 [2024-07-26 12:25:34.542778] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.495 [2024-07-26 12:25:34.543021] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.495 [2024-07-26 12:25:34.543044] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.495 [2024-07-26 12:25:34.543068] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.495 [2024-07-26 12:25:34.546647] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.495 [2024-07-26 12:25:34.555923] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.495 [2024-07-26 12:25:34.556340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.495 [2024-07-26 12:25:34.556372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.495 [2024-07-26 12:25:34.556390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.495 [2024-07-26 12:25:34.556628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.495 [2024-07-26 12:25:34.556871] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.495 [2024-07-26 12:25:34.556893] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.496 [2024-07-26 12:25:34.556908] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.496 [2024-07-26 12:25:34.560493] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.496 [2024-07-26 12:25:34.569772] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.496 [2024-07-26 12:25:34.570183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.496 [2024-07-26 12:25:34.570214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.496 [2024-07-26 12:25:34.570232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.496 [2024-07-26 12:25:34.570470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.496 [2024-07-26 12:25:34.570713] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.496 [2024-07-26 12:25:34.570736] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.496 [2024-07-26 12:25:34.570751] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.496 [2024-07-26 12:25:34.574334] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.496 [2024-07-26 12:25:34.583630] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.496 [2024-07-26 12:25:34.584077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.496 [2024-07-26 12:25:34.584109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.496 [2024-07-26 12:25:34.584127] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.496 [2024-07-26 12:25:34.584371] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.496 [2024-07-26 12:25:34.584614] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.496 [2024-07-26 12:25:34.584637] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.496 [2024-07-26 12:25:34.584653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.496 [2024-07-26 12:25:34.588243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.496 [2024-07-26 12:25:34.597534] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.496 [2024-07-26 12:25:34.597938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.496 [2024-07-26 12:25:34.597969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.496 [2024-07-26 12:25:34.597988] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.496 [2024-07-26 12:25:34.598237] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.496 [2024-07-26 12:25:34.598480] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.496 [2024-07-26 12:25:34.598503] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.496 [2024-07-26 12:25:34.598518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.496 [2024-07-26 12:25:34.602100] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.496 [2024-07-26 12:25:34.611377] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.496 [2024-07-26 12:25:34.611758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.496 [2024-07-26 12:25:34.611789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.496 [2024-07-26 12:25:34.611806] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.496 [2024-07-26 12:25:34.612045] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.496 [2024-07-26 12:25:34.612297] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.496 [2024-07-26 12:25:34.612321] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.496 [2024-07-26 12:25:34.612336] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.496 [2024-07-26 12:25:34.615909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.496 [2024-07-26 12:25:34.625403] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.496 [2024-07-26 12:25:34.625810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.496 [2024-07-26 12:25:34.625841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.496 [2024-07-26 12:25:34.625859] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.496 [2024-07-26 12:25:34.626108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.496 [2024-07-26 12:25:34.626351] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.496 [2024-07-26 12:25:34.626374] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.496 [2024-07-26 12:25:34.626395] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.496 [2024-07-26 12:25:34.629970] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.496 [2024-07-26 12:25:34.639263] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.496 [2024-07-26 12:25:34.639704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.496 [2024-07-26 12:25:34.639734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.496 [2024-07-26 12:25:34.639752] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.496 [2024-07-26 12:25:34.639989] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.496 [2024-07-26 12:25:34.640242] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.496 [2024-07-26 12:25:34.640266] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.496 [2024-07-26 12:25:34.640281] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.496 [2024-07-26 12:25:34.643871] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.496 [2024-07-26 12:25:34.653171] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.496 [2024-07-26 12:25:34.653632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.496 [2024-07-26 12:25:34.653680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.496 [2024-07-26 12:25:34.653698] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.496 [2024-07-26 12:25:34.653948] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.496 [2024-07-26 12:25:34.654210] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.496 [2024-07-26 12:25:34.654234] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.496 [2024-07-26 12:25:34.654249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.496 [2024-07-26 12:25:34.657823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.496 [2024-07-26 12:25:34.667137] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.496 [2024-07-26 12:25:34.667571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.496 [2024-07-26 12:25:34.667602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.496 [2024-07-26 12:25:34.667619] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.496 [2024-07-26 12:25:34.667858] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.496 [2024-07-26 12:25:34.668112] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.496 [2024-07-26 12:25:34.668135] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.496 [2024-07-26 12:25:34.668150] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.496 [2024-07-26 12:25:34.671726] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.496 [2024-07-26 12:25:34.681010] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.496 [2024-07-26 12:25:34.681477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.496 [2024-07-26 12:25:34.681531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.496 [2024-07-26 12:25:34.681550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.496 [2024-07-26 12:25:34.681788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.496 [2024-07-26 12:25:34.682029] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.496 [2024-07-26 12:25:34.682052] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.496 [2024-07-26 12:25:34.682077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.496 [2024-07-26 12:25:34.685657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.496 [2024-07-26 12:25:34.694956] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.496 [2024-07-26 12:25:34.695480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.497 [2024-07-26 12:25:34.695533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.497 [2024-07-26 12:25:34.695550] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.497 [2024-07-26 12:25:34.695788] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.497 [2024-07-26 12:25:34.696032] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.497 [2024-07-26 12:25:34.696055] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.497 [2024-07-26 12:25:34.696081] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:41.497 [2024-07-26 12:25:34.699684] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:41.497 [2024-07-26 12:25:34.708974] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.497 [2024-07-26 12:25:34.709368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.497 [2024-07-26 12:25:34.709398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.497 [2024-07-26 12:25:34.709416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.497 [2024-07-26 12:25:34.709654] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.497 [2024-07-26 12:25:34.709896] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.497 [2024-07-26 12:25:34.709918] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.497 [2024-07-26 12:25:34.709933] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.497 [2024-07-26 12:25:34.713518] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.497 [2024-07-26 12:25:34.723017] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.497 [2024-07-26 12:25:34.723455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.497 [2024-07-26 12:25:34.723486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.497 [2024-07-26 12:25:34.723504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.497 [2024-07-26 12:25:34.723743] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.497 [2024-07-26 12:25:34.723991] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.497 [2024-07-26 12:25:34.724014] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.497 [2024-07-26 12:25:34.724029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.497 [2024-07-26 12:25:34.727610] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.497 [2024-07-26 12:25:34.736892] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.497 [2024-07-26 12:25:34.737288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.497 [2024-07-26 12:25:34.737319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.497 [2024-07-26 12:25:34.737337] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.497 [2024-07-26 12:25:34.737575] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.497 [2024-07-26 12:25:34.737817] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.497 [2024-07-26 12:25:34.737840] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.497 [2024-07-26 12:25:34.737855] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.497 [2024-07-26 12:25:34.741444] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.497 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 2974621 Killed "${NVMF_APP[@]}" "$@"
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@724 -- # xtrace_disable
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=2975693
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 2975693
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@831 -- # '[' -z 2975693 ']'
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # local max_retries=100
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # xtrace_disable
00:24:41.497 12:25:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:24:41.757 [2024-07-26 12:25:34.750942] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.757 [2024-07-26 12:25:34.751342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.757 [2024-07-26 12:25:34.751373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.757 [2024-07-26 12:25:34.751390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.757 [2024-07-26 12:25:34.751628] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.757 [2024-07-26 12:25:34.751876] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.757 [2024-07-26 12:25:34.751900] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.757 [2024-07-26 12:25:34.751916] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.757 [2024-07-26 12:25:34.755501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.757 [2024-07-26 12:25:34.764989] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.757 [2024-07-26 12:25:34.765387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.757 [2024-07-26 12:25:34.765417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.757 [2024-07-26 12:25:34.765434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.757 [2024-07-26 12:25:34.765672] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.757 [2024-07-26 12:25:34.765914] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.757 [2024-07-26 12:25:34.765937] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.757 [2024-07-26 12:25:34.765953] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.757 [2024-07-26 12:25:34.769536] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.757 [2024-07-26 12:25:34.778491] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.757 [2024-07-26 12:25:34.778926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.757 [2024-07-26 12:25:34.778965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.757 [2024-07-26 12:25:34.778993] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.757 [2024-07-26 12:25:34.779286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.757 [2024-07-26 12:25:34.779591] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.757 [2024-07-26 12:25:34.779619] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.757 [2024-07-26 12:25:34.779641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.757 [2024-07-26 12:25:34.783416] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.757 [2024-07-26 12:25:34.791921] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.757 [2024-07-26 12:25:34.792368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.757 [2024-07-26 12:25:34.792399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.757 [2024-07-26 12:25:34.792416] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.757 [2024-07-26 12:25:34.792661] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.757 [2024-07-26 12:25:34.792867] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.757 [2024-07-26 12:25:34.792886] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.757 [2024-07-26 12:25:34.792900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.757 [2024-07-26 12:25:34.795317] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
00:24:41.757 [2024-07-26 12:25:34.795398] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:24:41.757 [2024-07-26 12:25:34.796135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.757 [2024-07-26 12:25:34.805452] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.757 [2024-07-26 12:25:34.805931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.757 [2024-07-26 12:25:34.805959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.757 [2024-07-26 12:25:34.805975] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.757 [2024-07-26 12:25:34.806199] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.757 [2024-07-26 12:25:34.806455] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.757 [2024-07-26 12:25:34.806475] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.757 [2024-07-26 12:25:34.806488] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.757 [2024-07-26 12:25:34.809560] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.757 [2024-07-26 12:25:34.818879] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.757 [2024-07-26 12:25:34.819319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.757 [2024-07-26 12:25:34.819355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.757 [2024-07-26 12:25:34.819372] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.757 [2024-07-26 12:25:34.819616] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.757 [2024-07-26 12:25:34.819822] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.757 [2024-07-26 12:25:34.819841] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.757 [2024-07-26 12:25:34.819854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.757 [2024-07-26 12:25:34.823078] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.757 EAL: No free 2048 kB hugepages reported on node 1
00:24:41.757 [2024-07-26 12:25:34.832277] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.757 [2024-07-26 12:25:34.832763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.757 [2024-07-26 12:25:34.832791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.757 [2024-07-26 12:25:34.832808] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.757 [2024-07-26 12:25:34.833051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.757 [2024-07-26 12:25:34.833287] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.757 [2024-07-26 12:25:34.833308] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.757 [2024-07-26 12:25:34.833321] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.757 [2024-07-26 12:25:34.836782] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.757 [2024-07-26 12:25:34.845641] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.757 [2024-07-26 12:25:34.846129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.758 [2024-07-26 12:25:34.846158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.758 [2024-07-26 12:25:34.846174] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.758 [2024-07-26 12:25:34.846415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.758 [2024-07-26 12:25:34.846621] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.758 [2024-07-26 12:25:34.846640] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.758 [2024-07-26 12:25:34.846653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.758 [2024-07-26 12:25:34.849763] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.758 [2024-07-26 12:25:34.859084] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.758 [2024-07-26 12:25:34.859475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.758 [2024-07-26 12:25:34.859503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.758 [2024-07-26 12:25:34.859519] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.758 [2024-07-26 12:25:34.859761] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.758 [2024-07-26 12:25:34.859967] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.758 [2024-07-26 12:25:34.859986] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.758 [2024-07-26 12:25:34.859999] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.758 [2024-07-26 12:25:34.863092] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.758 [2024-07-26 12:25:34.865993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3
00:24:41.758 [2024-07-26 12:25:34.872422] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.758 [2024-07-26 12:25:34.873037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.758 [2024-07-26 12:25:34.873098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.758 [2024-07-26 12:25:34.873118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.758 [2024-07-26 12:25:34.873368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.758 [2024-07-26 12:25:34.873577] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.758 [2024-07-26 12:25:34.873598] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.758 [2024-07-26 12:25:34.873613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.758 [2024-07-26 12:25:34.876745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.758 [2024-07-26 12:25:34.885874] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.758 [2024-07-26 12:25:34.886426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.758 [2024-07-26 12:25:34.886470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.758 [2024-07-26 12:25:34.886490] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.758 [2024-07-26 12:25:34.886737] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.758 [2024-07-26 12:25:34.886946] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.758 [2024-07-26 12:25:34.886966] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.758 [2024-07-26 12:25:34.886992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.758 [2024-07-26 12:25:34.890120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.758 [2024-07-26 12:25:34.899301] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.758 [2024-07-26 12:25:34.899764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.758 [2024-07-26 12:25:34.899792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.758 [2024-07-26 12:25:34.899809] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.758 [2024-07-26 12:25:34.900066] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.758 [2024-07-26 12:25:34.900292] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.758 [2024-07-26 12:25:34.900313] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.758 [2024-07-26 12:25:34.900326] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.758 [2024-07-26 12:25:34.903449] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.758 [2024-07-26 12:25:34.912759] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.758 [2024-07-26 12:25:34.913177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.758 [2024-07-26 12:25:34.913206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.758 [2024-07-26 12:25:34.913222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.758 [2024-07-26 12:25:34.913465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.758 [2024-07-26 12:25:34.913671] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.758 [2024-07-26 12:25:34.913690] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.758 [2024-07-26 12:25:34.913704] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.758 [2024-07-26 12:25:34.916785] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.758 [2024-07-26 12:25:34.926268] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.758 [2024-07-26 12:25:34.926779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.758 [2024-07-26 12:25:34.926815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.758 [2024-07-26 12:25:34.926833] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.758 [2024-07-26 12:25:34.927091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.758 [2024-07-26 12:25:34.927309] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.758 [2024-07-26 12:25:34.927330] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.758 [2024-07-26 12:25:34.927345] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.758 [2024-07-26 12:25:34.930456] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.758 [2024-07-26 12:25:34.939737] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.758 [2024-07-26 12:25:34.940296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.758 [2024-07-26 12:25:34.940330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.758 [2024-07-26 12:25:34.940348] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.758 [2024-07-26 12:25:34.940593] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.758 [2024-07-26 12:25:34.940801] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.758 [2024-07-26 12:25:34.940821] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.758 [2024-07-26 12:25:34.940836] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.758 [2024-07-26 12:25:34.943829] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.758 [2024-07-26 12:25:34.953172] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.758 [2024-07-26 12:25:34.953644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.758 [2024-07-26 12:25:34.953672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.758 [2024-07-26 12:25:34.953688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.758 [2024-07-26 12:25:34.953931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.758 [2024-07-26 12:25:34.954164] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.758 [2024-07-26 12:25:34.954185] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.758 [2024-07-26 12:25:34.954199] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.758 [2024-07-26 12:25:34.957328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.758 [2024-07-26 12:25:34.966570] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.758 [2024-07-26 12:25:34.966974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.758 [2024-07-26 12:25:34.967000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.759 [2024-07-26 12:25:34.967015] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.759 [2024-07-26 12:25:34.967281] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.759 [2024-07-26 12:25:34.967506] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.759 [2024-07-26 12:25:34.967526] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.759 [2024-07-26 12:25:34.967540] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.759 [2024-07-26 12:25:34.970616] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.759 [2024-07-26 12:25:34.975178] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:24:41.759 [2024-07-26 12:25:34.975208] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:24:41.759 [2024-07-26 12:25:34.975222] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:24:41.759 [2024-07-26 12:25:34.975233] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:24:41.759 [2024-07-26 12:25:34.975243] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:24:41.759 [2024-07-26 12:25:34.978093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2
00:24:41.759 [2024-07-26 12:25:34.978121] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3
00:24:41.759 [2024-07-26 12:25:34.978124] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
00:24:41.759 [2024-07-26 12:25:34.980133] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.759 [2024-07-26 12:25:34.980548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.759 [2024-07-26 12:25:34.980576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.759 [2024-07-26 12:25:34.980592] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.759 [2024-07-26 12:25:34.980808] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.759 [2024-07-26 12:25:34.981027] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.759 [2024-07-26 12:25:34.981047] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.759 [2024-07-26 12:25:34.981069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.759 [2024-07-26 12:25:34.984253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.759 [2024-07-26 12:25:34.993604] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:24:41.759 [2024-07-26 12:25:34.994179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:41.759 [2024-07-26 12:25:34.994220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420
00:24:41.759 [2024-07-26 12:25:34.994239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set
00:24:41.759 [2024-07-26 12:25:34.994465] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor
00:24:41.759 [2024-07-26 12:25:34.994688] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state
00:24:41.759 [2024-07-26 12:25:34.994711] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed
00:24:41.759 [2024-07-26 12:25:34.994729] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:24:41.759 [2024-07-26 12:25:34.997944] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:24:41.759 [2024-07-26 12:25:35.007358] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:41.759 [2024-07-26 12:25:35.007933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:41.759 [2024-07-26 12:25:35.007977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:41.759 [2024-07-26 12:25:35.007998] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:41.759 [2024-07-26 12:25:35.008233] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:41.759 [2024-07-26 12:25:35.008466] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:41.759 [2024-07-26 12:25:35.008488] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:41.759 [2024-07-26 12:25:35.008505] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.020 [2024-07-26 12:25:35.011804] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.020 [2024-07-26 12:25:35.020887] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.020 [2024-07-26 12:25:35.021468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.020 [2024-07-26 12:25:35.021511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:42.020 [2024-07-26 12:25:35.021533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:42.020 [2024-07-26 12:25:35.021776] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:42.020 [2024-07-26 12:25:35.021993] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.020 [2024-07-26 12:25:35.022015] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.020 [2024-07-26 12:25:35.022032] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.020 [2024-07-26 12:25:35.025266] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.020 [2024-07-26 12:25:35.034520] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.020 [2024-07-26 12:25:35.035093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.020 [2024-07-26 12:25:35.035145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:42.020 [2024-07-26 12:25:35.035177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:42.020 [2024-07-26 12:25:35.035470] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:42.020 [2024-07-26 12:25:35.035768] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.020 [2024-07-26 12:25:35.035798] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.020 [2024-07-26 12:25:35.035824] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.020 [2024-07-26 12:25:35.040014] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.020 [2024-07-26 12:25:35.047992] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.020 [2024-07-26 12:25:35.048582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.020 [2024-07-26 12:25:35.048628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:42.020 [2024-07-26 12:25:35.048650] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:42.020 [2024-07-26 12:25:35.048889] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:42.020 [2024-07-26 12:25:35.049137] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.020 [2024-07-26 12:25:35.049160] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.020 [2024-07-26 12:25:35.049179] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.020 [2024-07-26 12:25:35.052470] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.020 [2024-07-26 12:25:35.061488] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.020 [2024-07-26 12:25:35.062087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.020 [2024-07-26 12:25:35.062129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:42.020 [2024-07-26 12:25:35.062150] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:42.020 [2024-07-26 12:25:35.062393] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:42.020 [2024-07-26 12:25:35.062611] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.020 [2024-07-26 12:25:35.062633] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.020 [2024-07-26 12:25:35.062651] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.020 [2024-07-26 12:25:35.065813] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.020 [2024-07-26 12:25:35.075031] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.020 [2024-07-26 12:25:35.075494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.020 [2024-07-26 12:25:35.075523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:42.020 [2024-07-26 12:25:35.075540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:42.020 [2024-07-26 12:25:35.075755] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:42.020 [2024-07-26 12:25:35.075983] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.021 [2024-07-26 12:25:35.076003] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.021 [2024-07-26 12:25:35.076016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.021 [2024-07-26 12:25:35.079278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.021 [2024-07-26 12:25:35.088623] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.021 [2024-07-26 12:25:35.089010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.021 [2024-07-26 12:25:35.089038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:42.021 [2024-07-26 12:25:35.089054] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:42.021 [2024-07-26 12:25:35.089278] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:42.021 [2024-07-26 12:25:35.089496] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.021 [2024-07-26 12:25:35.089517] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.021 [2024-07-26 12:25:35.089531] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.021 [2024-07-26 12:25:35.092686] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # return 0 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.021 [2024-07-26 12:25:35.102076] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.021 [2024-07-26 12:25:35.102463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.021 [2024-07-26 12:25:35.102491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:42.021 [2024-07-26 12:25:35.102507] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:42.021 [2024-07-26 12:25:35.102722] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:42.021 [2024-07-26 12:25:35.102949] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.021 [2024-07-26 12:25:35.102970] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.021 [2024-07-26 12:25:35.102983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.021 [2024-07-26 12:25:35.106201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.021 [2024-07-26 12:25:35.115744] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.021 [2024-07-26 12:25:35.116121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.021 [2024-07-26 12:25:35.116150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:42.021 [2024-07-26 12:25:35.116166] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:42.021 [2024-07-26 12:25:35.116383] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:42.021 [2024-07-26 12:25:35.116610] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.021 [2024-07-26 12:25:35.116630] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.021 [2024-07-26 12:25:35.116644] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.021 [2024-07-26 12:25:35.119857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.021 [2024-07-26 12:25:35.124880] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:42.021 [2024-07-26 12:25:35.129239] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.021 [2024-07-26 12:25:35.129666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.021 [2024-07-26 12:25:35.129694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:42.021 [2024-07-26 12:25:35.129710] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:42.021 [2024-07-26 12:25:35.129939] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:42.021 [2024-07-26 12:25:35.130192] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.021 [2024-07-26 12:25:35.130214] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.021 [2024-07-26 12:25:35.130234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.021 [2024-07-26 12:25:35.133458] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.021 [2024-07-26 12:25:35.142735] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.021 [2024-07-26 12:25:35.143147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.021 [2024-07-26 12:25:35.143175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:42.021 [2024-07-26 12:25:35.143191] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:42.021 [2024-07-26 12:25:35.143407] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.021 [2024-07-26 12:25:35.143624] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:42.021 [2024-07-26 12:25:35.143645] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.021 [2024-07-26 12:25:35.143659] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.021 [2024-07-26 12:25:35.146909] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.021 [2024-07-26 12:25:35.156408] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.021 [2024-07-26 12:25:35.157019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.021 [2024-07-26 12:25:35.157067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:42.021 [2024-07-26 12:25:35.157091] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:42.021 [2024-07-26 12:25:35.157326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:42.021 [2024-07-26 12:25:35.157561] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.021 [2024-07-26 12:25:35.157583] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.021 [2024-07-26 12:25:35.157600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.021 [2024-07-26 12:25:35.160879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.021 Malloc0 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.021 [2024-07-26 12:25:35.169961] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.021 [2024-07-26 12:25:35.170500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.021 [2024-07-26 12:25:35.170533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:42.021 [2024-07-26 12:25:35.170552] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:42.021 [2024-07-26 12:25:35.170775] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:42.021 [2024-07-26 12:25:35.171011] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.021 [2024-07-26 12:25:35.171033] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.021 [2024-07-26 12:25:35.171050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:24:42.021 [2024-07-26 12:25:35.174332] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.021 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.021 [2024-07-26 12:25:35.183622] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.021 [2024-07-26 12:25:35.184016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:42.022 [2024-07-26 12:25:35.184043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1ee9ac0 with addr=10.0.0.2, port=4420 00:24:42.022 [2024-07-26 12:25:35.184066] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ee9ac0 is same with the state(5) to be set 00:24:42.022 [2024-07-26 12:25:35.184283] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ee9ac0 (9): Bad file descriptor 00:24:42.022 [2024-07-26 12:25:35.184501] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:24:42.022 [2024-07-26 12:25:35.184522] nvme_ctrlr.c:1822:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:24:42.022 [2024-07-26 12:25:35.184537] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:24:42.022 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.022 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:42.022 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:42.022 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:42.022 [2024-07-26 12:25:35.187818] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:42.022 [2024-07-26 12:25:35.188569] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:42.022 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:42.022 12:25:35 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 2974912 00:24:42.022 [2024-07-26 12:25:35.197258] nvme_ctrlr.c:1724:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:24:42.282 [2024-07-26 12:25:35.387163] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:24:52.258 00:24:52.258 Latency(us) 00:24:52.258 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:52.258 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:24:52.258 Verification LBA range: start 0x0 length 0x4000 00:24:52.258 Nvme1n1 : 15.02 6581.24 25.71 9090.38 0.00 8142.69 892.02 22524.97 00:24:52.258 =================================================================================================================== 00:24:52.258 Total : 6581.24 25.71 9090.38 0.00 8142.69 892.02 22524.97 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:52.258 rmmod nvme_tcp 00:24:52.258 rmmod nvme_fabrics 00:24:52.258 rmmod nvme_keyring 00:24:52.258 12:25:44 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 2975693 ']' 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 2975693 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@950 -- # '[' -z 2975693 ']' 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # kill -0 2975693 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # uname 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2975693 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:24:52.258 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2975693' 00:24:52.258 killing process with pid 2975693 00:24:52.259 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@969 -- # kill 2975693 00:24:52.259 12:25:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@974 -- # wait 2975693 00:24:52.259 12:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:52.259 12:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:52.259 12:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:52.259 12:25:45 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:52.259 12:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:52.259 12:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:52.259 12:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:52.259 12:25:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.166 12:25:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:54.166 00:24:54.166 real 0m22.883s 00:24:54.166 user 1m1.421s 00:24:54.166 sys 0m4.211s 00:24:54.166 12:25:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:54.166 12:25:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:24:54.166 ************************************ 00:24:54.166 END TEST nvmf_bdevperf 00:24:54.166 ************************************ 00:24:54.166 12:25:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:54.166 12:25:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:24:54.166 12:25:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:54.166 12:25:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:54.166 ************************************ 00:24:54.166 START TEST nvmf_target_disconnect 00:24:54.166 ************************************ 00:24:54.166 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:24:54.166 * Looking for test storage... 
00:24:54.166 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:54.166 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:54.166 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:24:54.166 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:54.167 12:25:47 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:24:54.167 12:25:47 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:56.073 
12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:56.073 12:25:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:24:56.073 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.073 12:25:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:24:56.073 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up 
== up ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:24:56.073 Found net devices under 0000:0a:00.0: cvl_0_0 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:24:56.073 Found net devices under 0000:0a:00.1: cvl_0_1 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:24:56.073 12:25:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:56.073 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:56.073 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.244 ms 00:24:56.073 00:24:56.073 --- 10.0.0.2 ping statistics --- 00:24:56.073 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.073 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:24:56.073 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:56.073 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:56.073 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.208 ms 00:24:56.073 00:24:56.074 --- 10.0.0.1 ping statistics --- 00:24:56.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:56.074 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:56.074 ************************************ 00:24:56.074 START TEST nvmf_target_disconnect_tc1 00:24:56.074 ************************************ 00:24:56.074 12:25:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc1 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # local es=0 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@638 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # case "$(type -t 
"$arg")" in 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:24:56.074 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:24:56.333 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.333 [2024-07-26 12:25:49.394078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:56.333 [2024-07-26 12:25:49.394145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x14011a0 with addr=10.0.0.2, port=4420 00:24:56.333 [2024-07-26 12:25:49.394185] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:56.333 [2024-07-26 12:25:49.394207] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:24:56.333 [2024-07-26 12:25:49.394222] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:24:56.333 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:24:56.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:24:56.333 Initializing NVMe Controllers 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@653 -- # es=1 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:24:56.333 12:25:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:24:56.333 00:24:56.333 real 0m0.094s 00:24:56.333 user 0m0.038s 00:24:56.333 sys 0m0.056s 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1126 -- # xtrace_disable 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:24:56.333 ************************************ 00:24:56.333 END TEST nvmf_target_disconnect_tc1 00:24:56.333 ************************************ 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1107 -- # xtrace_disable 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:24:56.333 ************************************ 00:24:56.333 START TEST nvmf_target_disconnect_tc2 00:24:56.333 ************************************ 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1125 -- # nvmf_target_disconnect_tc2 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- 
nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2978740 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2978740 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2978740 ']' 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:56.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable 00:24:56.333 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:56.333 [2024-07-26 12:25:49.509806] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:24:56.333 [2024-07-26 12:25:49.509900] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:56.333 EAL: No free 2048 kB hugepages reported on node 1 00:24:56.333 [2024-07-26 12:25:49.577113] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:24:56.592 [2024-07-26 12:25:49.685814] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:56.592 [2024-07-26 12:25:49.685868] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:56.592 [2024-07-26 12:25:49.685891] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:56.592 [2024-07-26 12:25:49.685907] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:56.592 [2024-07-26 12:25:49.685917] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:24:56.592 [2024-07-26 12:25:49.686013] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:24:56.592 [2024-07-26 12:25:49.686083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:24:56.592 [2024-07-26 12:25:49.686141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:24:56.592 [2024-07-26 12:25:49.686144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:24:56.592 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:24:56.592 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0 00:24:56.592 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:56.592 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable 00:24:56.592 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:56.592 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:56.592 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:24:56.592 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:24:56.592 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:24:56.851 Malloc0 00:24:56.851 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:24:56.851 12:25:49 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:24:56.851 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:56.851 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:56.851 [2024-07-26 12:25:49.849883] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:24:56.851 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:56.851 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:24:56.851 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:56.851 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:56.851 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:56.852 [2024-07-26 12:25:49.878188] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=2978878
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'
00:24:56.852 12:25:49 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2
00:24:56.852 EAL: No free 2048 kB hugepages reported on node 1
00:24:58.765 12:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 2978740
00:24:58.765 12:25:51 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Write completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Write completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Write completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Write completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Write completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Write completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Write completed with error (sct=0, sc=8)
00:24:58.765 starting I/O failed
00:24:58.765 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 [2024-07-26 12:25:51.902737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 [2024-07-26 12:25:51.903042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 [2024-07-26 12:25:51.903382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Write completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.766 starting I/O failed
00:24:58.766 Read completed with error (sct=0, sc=8)
00:24:58.767 starting I/O failed
00:24:58.767 Read completed with error (sct=0, sc=8)
00:24:58.767 starting I/O failed
00:24:58.767 [2024-07-26 12:25:51.903706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:24:58.767 [2024-07-26 12:25:51.903954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.904009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.904185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.904213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.904356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.904382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.904544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.904571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.904710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.904751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.904940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.904968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.905164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.905190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.905328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.905355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.905521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.905563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.905917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.905984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.906175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.906202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.906361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.906387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.906536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.906563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.906763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.906805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.906976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.907005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.907198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.907226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.907376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.907402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.907595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.907622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.907758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.907784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.907926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.907952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.908126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.908157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.908289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.908315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.908567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.908593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.908765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.908794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.908971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.908997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.909212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.909240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.909401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.909441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.909571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.909595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.909794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.909822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.910003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.910029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.910165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.910191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.910347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.910388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.910552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.910580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.767 [2024-07-26 12:25:51.910751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.767 [2024-07-26 12:25:51.910791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.767 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.911042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.911076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.911207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.911234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.911478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.911502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.911821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.911881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.912077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.912121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.912256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.912282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.912433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.912459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.912586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.912627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.912789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.912817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.913034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.913068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.913202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.913229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.913358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.913384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.913558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.913584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.914130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.914157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.914296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.914322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.914454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.914480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.914630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.914655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.914809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.914835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.914988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.915014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.915141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.915168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.915294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.915320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.915503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.915543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.915747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.915772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.915938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.915966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.916137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.916163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.916285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.768 [2024-07-26 12:25:51.916312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.768 qpair failed and we were unable to recover it.
00:24:58.768 [2024-07-26 12:25:51.916446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.768 [2024-07-26 12:25:51.916476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.768 qpair failed and we were unable to recover it. 00:24:58.768 [2024-07-26 12:25:51.916626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.768 [2024-07-26 12:25:51.916652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.768 qpair failed and we were unable to recover it. 00:24:58.768 [2024-07-26 12:25:51.916836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.768 [2024-07-26 12:25:51.916880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.768 qpair failed and we were unable to recover it. 00:24:58.768 [2024-07-26 12:25:51.917037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.768 [2024-07-26 12:25:51.917073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.768 qpair failed and we were unable to recover it. 00:24:58.768 [2024-07-26 12:25:51.917222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.768 [2024-07-26 12:25:51.917261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:58.768 qpair failed and we were unable to recover it. 
00:24:58.768 [2024-07-26 12:25:51.917457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.768 [2024-07-26 12:25:51.917485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:58.768 qpair failed and we were unable to recover it. 00:24:58.768 [2024-07-26 12:25:51.917664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.768 [2024-07-26 12:25:51.917690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:58.768 qpair failed and we were unable to recover it. 00:24:58.768 [2024-07-26 12:25:51.917849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.917874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.918031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.918057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.918190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.918216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 
00:24:58.769 [2024-07-26 12:25:51.918380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.918424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.918577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.918603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.918757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.918783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.918936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.918962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.919102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.919129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 
00:24:58.769 [2024-07-26 12:25:51.919298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.919337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.919521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.919548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.919704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.919730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.919857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.919884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.920040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.920073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 
00:24:58.769 [2024-07-26 12:25:51.920212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.920239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.920433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.920459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.920615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.920641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.920775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.920802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.920961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.920988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 
00:24:58.769 [2024-07-26 12:25:51.921119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.921146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.921277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.921304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.921470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.921497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.921693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.921719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.921871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.921897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 
00:24:58.769 [2024-07-26 12:25:51.922130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.922157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.922314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.922357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.922537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.922563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.922688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.922714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.922906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.922933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 
00:24:58.769 [2024-07-26 12:25:51.923055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.923088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.923250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.923276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.923398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.923425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.923572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.923598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.923748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.923774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 
00:24:58.769 [2024-07-26 12:25:51.923931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.769 [2024-07-26 12:25:51.923961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.769 qpair failed and we were unable to recover it. 00:24:58.769 [2024-07-26 12:25:51.924119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.924146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.924280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.924306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.924465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.924492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.924645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.924671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 
00:24:58.770 [2024-07-26 12:25:51.924837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.924863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.924993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.925020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.925160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.925186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.925337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.925363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.925507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.925533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 
00:24:58.770 [2024-07-26 12:25:51.925689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.925715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.925871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.925897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.926021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.926047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.926173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.926199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.926323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.926349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 
00:24:58.770 [2024-07-26 12:25:51.926500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.926526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.926714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.926740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.926873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.926899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.927099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.927139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.927302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.927330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 
00:24:58.770 [2024-07-26 12:25:51.927457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.927483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.927685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.927749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.927928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.927953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.928077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.928104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.928260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.928286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 
00:24:58.770 [2024-07-26 12:25:51.928441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.928467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.928620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.928646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.928864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.928891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.770 qpair failed and we were unable to recover it. 00:24:58.770 [2024-07-26 12:25:51.929044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.770 [2024-07-26 12:25:51.929085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.929210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.929238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 
00:24:58.771 [2024-07-26 12:25:51.929368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.929394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.929584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.929611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.929803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.929832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.930010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.930041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.930221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.930247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 
00:24:58.771 [2024-07-26 12:25:51.930428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.930454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.930577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.930604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.930796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.930823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.930978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.931005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.931184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.931225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 
00:24:58.771 [2024-07-26 12:25:51.931386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.931419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.931578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.931605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.931782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.931808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.931958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.931984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.932114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.932141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 
00:24:58.771 [2024-07-26 12:25:51.932295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.932323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.932503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.932529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.932762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.932810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.932996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.933022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.933154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.933182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 
00:24:58.771 [2024-07-26 12:25:51.933338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.933365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.933483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.933509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.933691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.933735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.933902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.933932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.934147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.934175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 
00:24:58.771 [2024-07-26 12:25:51.934362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.934387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.934540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.934568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.934763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.934789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.934917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.934945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.935124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.935151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 
00:24:58.771 [2024-07-26 12:25:51.935277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.935303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.935483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.935509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.935690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.935719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.771 qpair failed and we were unable to recover it. 00:24:58.771 [2024-07-26 12:25:51.935900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.771 [2024-07-26 12:25:51.935926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.936080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.936106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 
00:24:58.772 [2024-07-26 12:25:51.936267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.936292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.936417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.936444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.936604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.936647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.936854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.936880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.937030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.937056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 
00:24:58.772 [2024-07-26 12:25:51.937220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.937246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.937434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.937461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.937606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.937632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.937810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.937836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.937985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.938010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 
00:24:58.772 [2024-07-26 12:25:51.938190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.938217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.938371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.938397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.938530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.938555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.938739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.938765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.938970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.938999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 
00:24:58.772 [2024-07-26 12:25:51.939172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.939203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.939354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.939380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.939529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.939573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.939726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.939752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.939880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.939907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 
00:24:58.772 [2024-07-26 12:25:51.940067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.940095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.940284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.940310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.940476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.940504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.940703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.940729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.940888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.940914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 
00:24:58.772 [2024-07-26 12:25:51.941081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.941138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.941327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.941354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.941535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.941562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.941718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.941746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.941936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.941979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 
00:24:58.772 [2024-07-26 12:25:51.942184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.942211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.942368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.942410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.942613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.942639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.772 [2024-07-26 12:25:51.942793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.772 [2024-07-26 12:25:51.942819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.772 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.942941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.942967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 
00:24:58.773 [2024-07-26 12:25:51.943091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.943118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.943298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.943323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.943500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.943526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.943714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.943740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.943894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.943920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 
00:24:58.773 [2024-07-26 12:25:51.944075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.944102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.944258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.944284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.944477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.944504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.944754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.944808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.945014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.945040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 
00:24:58.773 [2024-07-26 12:25:51.945181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.945207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.945364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.945390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.945571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.945597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.945747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.945773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.945923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.945949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 
00:24:58.773 [2024-07-26 12:25:51.946117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.946146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.946325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.946351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.946504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.946531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.946700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.946726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.946856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.946882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 
00:24:58.773 [2024-07-26 12:25:51.947041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.947076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.947236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.947262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.947396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.947422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.947555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.947581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.947732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.947757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 
00:24:58.773 [2024-07-26 12:25:51.947917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.947942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.948067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.948094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.948226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.948252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.948406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.948432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.948563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.948589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 
00:24:58.773 [2024-07-26 12:25:51.948769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.948795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.948981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.949007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.949140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.949167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.949295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.773 [2024-07-26 12:25:51.949321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.773 qpair failed and we were unable to recover it. 00:24:58.773 [2024-07-26 12:25:51.949548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.774 [2024-07-26 12:25:51.949574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.774 qpair failed and we were unable to recover it. 
00:24:58.774 [2024-07-26 12:25:51.949725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.774 [2024-07-26 12:25:51.949750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.774 qpair failed and we were unable to recover it. 00:24:58.774 [2024-07-26 12:25:51.949905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.774 [2024-07-26 12:25:51.949930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.774 qpair failed and we were unable to recover it. 00:24:58.774 [2024-07-26 12:25:51.950092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.774 [2024-07-26 12:25:51.950118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.774 qpair failed and we were unable to recover it. 00:24:58.774 [2024-07-26 12:25:51.950244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.774 [2024-07-26 12:25:51.950271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.774 qpair failed and we were unable to recover it. 00:24:58.774 [2024-07-26 12:25:51.950429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.774 [2024-07-26 12:25:51.950455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.774 qpair failed and we were unable to recover it. 
00:24:58.774 [2024-07-26 12:25:51.950592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.774 [2024-07-26 12:25:51.950618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:58.774 qpair failed and we were unable to recover it.
00:24:58.774 (the three messages above repeated 6 more times for tqpair=0x7fb500000b90, timestamps 12:25:51.950783 through 12:25:51.951769)
00:24:58.774 [2024-07-26 12:25:51.951938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:58.774 [2024-07-26 12:25:51.951978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:58.774 qpair failed and we were unable to recover it.
00:24:58.777 (the three messages above repeated 107 more times for tqpair=0x7fb4f0000b90, timestamps 12:25:51.952144 through 12:25:51.972056)
00:24:58.777 [2024-07-26 12:25:51.972242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.777 [2024-07-26 12:25:51.972271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.777 qpair failed and we were unable to recover it. 00:24:58.777 [2024-07-26 12:25:51.972445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.777 [2024-07-26 12:25:51.972473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.777 qpair failed and we were unable to recover it. 00:24:58.777 [2024-07-26 12:25:51.972621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.777 [2024-07-26 12:25:51.972648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.777 qpair failed and we were unable to recover it. 00:24:58.777 [2024-07-26 12:25:51.972804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.777 [2024-07-26 12:25:51.972846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.777 qpair failed and we were unable to recover it. 00:24:58.777 [2024-07-26 12:25:51.973016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.777 [2024-07-26 12:25:51.973044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.777 qpair failed and we were unable to recover it. 
00:24:58.777 [2024-07-26 12:25:51.973223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.777 [2024-07-26 12:25:51.973249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.777 qpair failed and we were unable to recover it. 00:24:58.777 [2024-07-26 12:25:51.973450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.777 [2024-07-26 12:25:51.973480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.777 qpair failed and we were unable to recover it. 00:24:58.777 [2024-07-26 12:25:51.973683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.777 [2024-07-26 12:25:51.973709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.777 qpair failed and we were unable to recover it. 00:24:58.777 [2024-07-26 12:25:51.973889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.777 [2024-07-26 12:25:51.973915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.777 qpair failed and we were unable to recover it. 00:24:58.777 [2024-07-26 12:25:51.974033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.777 [2024-07-26 12:25:51.974065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.777 qpair failed and we were unable to recover it. 
00:24:58.777 [2024-07-26 12:25:51.974193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.974220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.974379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.974406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.974559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.974598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.974760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.974789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.974919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.974945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 
00:24:58.778 [2024-07-26 12:25:51.975127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.975155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.975290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.975316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.975514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.975540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.975746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.975804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.975973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.976003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 
00:24:58.778 [2024-07-26 12:25:51.976162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.976190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.976399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.976428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.976595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.976624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.976790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.976817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.977003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.977029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 
00:24:58.778 [2024-07-26 12:25:51.977218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.977252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.977431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.977458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.977607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.977634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.977809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.977839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.977989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.978016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 
00:24:58.778 [2024-07-26 12:25:51.978163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.978190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.978377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.978403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.978533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.978559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.978718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.978745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.978893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.978920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 
00:24:58.778 [2024-07-26 12:25:51.979072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.979099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.979256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.979282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.979436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.979463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.979597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.979623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.979794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.979823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 
00:24:58.778 [2024-07-26 12:25:51.979947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.979990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.980171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.980198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.980354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.980380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.980560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.980586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.778 [2024-07-26 12:25:51.980746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.980772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 
00:24:58.778 [2024-07-26 12:25:51.980900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.778 [2024-07-26 12:25:51.980926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.778 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.981097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.981123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.981278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.981304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.981423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.981450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.981660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.981688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 
00:24:58.779 [2024-07-26 12:25:51.981890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.981916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.982076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.982103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.982263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.982290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.982441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.982467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.982658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.982706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 
00:24:58.779 [2024-07-26 12:25:51.982857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.982883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.983013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.983039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.983196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.983222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.983385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.983413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.983594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.983620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 
00:24:58.779 [2024-07-26 12:25:51.983774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.983800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.983980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.984005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.984158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.984185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.984311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.984337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.984509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.984537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 
00:24:58.779 [2024-07-26 12:25:51.984734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.984764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.984888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.984914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.985093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.985119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.985251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.985277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.985396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.985422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 
00:24:58.779 [2024-07-26 12:25:51.985621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.985646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.985768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.985793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.985955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.985980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.986161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.986188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.986344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.986370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 
00:24:58.779 [2024-07-26 12:25:51.986526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.986552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.986706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.986732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.986914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.986940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.987100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.987127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 00:24:58.779 [2024-07-26 12:25:51.987304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.779 [2024-07-26 12:25:51.987332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:58.779 qpair failed and we were unable to recover it. 
00:24:58.783 [2024-07-26 12:25:52.008248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.783 [2024-07-26 12:25:52.008274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.783 qpair failed and we were unable to recover it. 00:24:58.783 [2024-07-26 12:25:52.008433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.783 [2024-07-26 12:25:52.008460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.783 qpair failed and we were unable to recover it. 00:24:58.783 [2024-07-26 12:25:52.008631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.783 [2024-07-26 12:25:52.008698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.783 qpair failed and we were unable to recover it. 00:24:58.783 [2024-07-26 12:25:52.008870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.783 [2024-07-26 12:25:52.008895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.783 qpair failed and we were unable to recover it. 00:24:58.783 [2024-07-26 12:25:52.009031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.783 [2024-07-26 12:25:52.009057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.783 qpair failed and we were unable to recover it. 
00:24:58.783 [2024-07-26 12:25:52.009218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.783 [2024-07-26 12:25:52.009244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.783 qpair failed and we were unable to recover it. 00:24:58.783 [2024-07-26 12:25:52.009370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.783 [2024-07-26 12:25:52.009396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.783 qpair failed and we were unable to recover it. 00:24:58.783 [2024-07-26 12:25:52.009559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.783 [2024-07-26 12:25:52.009585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.783 qpair failed and we were unable to recover it. 00:24:58.783 [2024-07-26 12:25:52.009759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.783 [2024-07-26 12:25:52.009789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.783 qpair failed and we were unable to recover it. 00:24:58.783 [2024-07-26 12:25:52.009967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.783 [2024-07-26 12:25:52.009996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.783 qpair failed and we were unable to recover it. 
00:24:58.783 [2024-07-26 12:25:52.010171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.783 [2024-07-26 12:25:52.010197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.783 qpair failed and we were unable to recover it. 00:24:58.783 [2024-07-26 12:25:52.010350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:58.783 [2024-07-26 12:25:52.010375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:58.783 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.010527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.010554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.010677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.010704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.010863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.010889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 
00:24:59.064 [2024-07-26 12:25:52.011040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.011091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.011285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.011311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.011496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.011523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.011688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.011714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.011843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.011869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 
00:24:59.064 [2024-07-26 12:25:52.011997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.012023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.012158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.012184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.012337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.012368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.012519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.012545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.012698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.012724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 
00:24:59.064 [2024-07-26 12:25:52.012906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.012933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.013070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.013097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.013275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.013300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.013457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.013482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.013610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.013636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 
00:24:59.064 [2024-07-26 12:25:52.013787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.013815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.014044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.014090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.014266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.014292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.014444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.014487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.014666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.014693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 
00:24:59.064 [2024-07-26 12:25:52.014876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.014919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.015084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.015111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.015301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.015327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.015482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.015508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.015683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.015709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 
00:24:59.064 [2024-07-26 12:25:52.015842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.064 [2024-07-26 12:25:52.015868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.064 qpair failed and we were unable to recover it. 00:24:59.064 [2024-07-26 12:25:52.016018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.016076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.016250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.016279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.016451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.016477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.016638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.016665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 
00:24:59.065 [2024-07-26 12:25:52.016817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.016843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.017023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.017049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.017283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.017309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.017440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.017466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.017647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.017673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 
00:24:59.065 [2024-07-26 12:25:52.017839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.017866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.018039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.018085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.018236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.018262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.018420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.018447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.018666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.018691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 
00:24:59.065 [2024-07-26 12:25:52.018825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.018850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.019004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.019031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.019196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.019223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.019402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.019428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.019584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.019609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 
00:24:59.065 [2024-07-26 12:25:52.019789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.019816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.019942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.019967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.020188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.020218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.020347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.020372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.020554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.020580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 
00:24:59.065 [2024-07-26 12:25:52.020735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.020761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.020892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.020918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.021076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.021102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.021233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.021259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.021440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.021465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 
00:24:59.065 [2024-07-26 12:25:52.021629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.021655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.021788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.021815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.021952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.021977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.022133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.022161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.065 [2024-07-26 12:25:52.022288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.022315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 
00:24:59.065 [2024-07-26 12:25:52.022491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.065 [2024-07-26 12:25:52.022517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.065 qpair failed and we were unable to recover it. 00:24:59.066 [2024-07-26 12:25:52.022700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.066 [2024-07-26 12:25:52.022726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.066 qpair failed and we were unable to recover it. 00:24:59.066 [2024-07-26 12:25:52.022876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.066 [2024-07-26 12:25:52.022902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.066 qpair failed and we were unable to recover it. 00:24:59.066 [2024-07-26 12:25:52.023050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.066 [2024-07-26 12:25:52.023085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.066 qpair failed and we were unable to recover it. 00:24:59.066 [2024-07-26 12:25:52.023312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.066 [2024-07-26 12:25:52.023339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.066 qpair failed and we were unable to recover it. 
00:24:59.067 [2024-07-26 12:25:52.035192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.067 [2024-07-26 12:25:52.035231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.067 qpair failed and we were unable to recover it.
00:24:59.067 [2024-07-26 12:25:52.035411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.067 [2024-07-26 12:25:52.035438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.067 qpair failed and we were unable to recover it.
00:24:59.067 [2024-07-26 12:25:52.035564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.067 [2024-07-26 12:25:52.035596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.067 qpair failed and we were unable to recover it.
00:24:59.067 [2024-07-26 12:25:52.035778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.067 [2024-07-26 12:25:52.035804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.067 qpair failed and we were unable to recover it.
00:24:59.067 [2024-07-26 12:25:52.035933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.067 [2024-07-26 12:25:52.035959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.067 qpair failed and we were unable to recover it.
00:24:59.069 [2024-07-26 12:25:52.044639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.044665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.044823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.044849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.045029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.045056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.045218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.045244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.045423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.045452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 
00:24:59.069 [2024-07-26 12:25:52.045616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.045645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.045859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.045886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.046053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.046085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.046239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.046264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.046455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.046481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 
00:24:59.069 [2024-07-26 12:25:52.046636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.046663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.046791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.046817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.047033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.047065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.047252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.047278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.047473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.047499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 
00:24:59.069 [2024-07-26 12:25:52.047631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.047657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.047834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.047860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.069 [2024-07-26 12:25:52.048006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.069 [2024-07-26 12:25:52.048035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.069 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.048211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.048237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.048420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.048464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 
00:24:59.070 [2024-07-26 12:25:52.048636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.048668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.048821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.048849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.049030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.049056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.049222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.049249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.049411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.049438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 
00:24:59.070 [2024-07-26 12:25:52.049590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.049617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.049775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.049803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.049993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.050020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.050202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.050248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.050436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.050464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 
00:24:59.070 [2024-07-26 12:25:52.050620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.050647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.050777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.050805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.050955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.050987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.051128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.051156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.051317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.051354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 
00:24:59.070 [2024-07-26 12:25:52.051523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.051556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.051736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.051773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.051895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.051920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.052054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.052091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.052250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.052276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 
00:24:59.070 [2024-07-26 12:25:52.052434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.052463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.052640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.052670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.052872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.052899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.053026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.053051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.053246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.053272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 
00:24:59.070 [2024-07-26 12:25:52.053401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.053428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.053582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.053624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.053801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.053829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.053947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.053973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.054124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.054151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 
00:24:59.070 [2024-07-26 12:25:52.054324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.054353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.054558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.054585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.070 qpair failed and we were unable to recover it. 00:24:59.070 [2024-07-26 12:25:52.054763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.070 [2024-07-26 12:25:52.054790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.054992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.055022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.055243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.055269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 
00:24:59.071 [2024-07-26 12:25:52.055425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.055452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.055619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.055648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.055801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.055840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.056028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.056054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.056221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.056248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 
00:24:59.071 [2024-07-26 12:25:52.056400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.056427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.056585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.056611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.056747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.056773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.056922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.056948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.057081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.057108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 
00:24:59.071 [2024-07-26 12:25:52.057296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.057323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.057455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.057481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.057637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.057663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.057856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.057882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.057998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.058024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 
00:24:59.071 [2024-07-26 12:25:52.058150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.058176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.058305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.058331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.058508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.058538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.058681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.058709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 00:24:59.071 [2024-07-26 12:25:52.058889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.071 [2024-07-26 12:25:52.058916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.071 qpair failed and we were unable to recover it. 
00:24:59.071 [2024-07-26 12:25:52.059101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.071 [2024-07-26 12:25:52.059128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.071 qpair failed and we were unable to recover it.
00:24:59.072 [2024-07-26 12:25:52.063159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.072 [2024-07-26 12:25:52.063203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.072 qpair failed and we were unable to recover it.
00:24:59.075 [2024-07-26 12:25:52.081271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.081297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 00:24:59.075 [2024-07-26 12:25:52.081451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.081479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 00:24:59.075 [2024-07-26 12:25:52.081655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.081686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 00:24:59.075 [2024-07-26 12:25:52.081848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.081875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 00:24:59.075 [2024-07-26 12:25:52.082075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.082105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 
00:24:59.075 [2024-07-26 12:25:52.082280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.082307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 00:24:59.075 [2024-07-26 12:25:52.082465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.082491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 00:24:59.075 [2024-07-26 12:25:52.082615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.082641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 00:24:59.075 [2024-07-26 12:25:52.082797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.082824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 00:24:59.075 [2024-07-26 12:25:52.082982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.083008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 
00:24:59.075 [2024-07-26 12:25:52.083151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.083178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 00:24:59.075 [2024-07-26 12:25:52.083335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.083362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 00:24:59.075 [2024-07-26 12:25:52.083526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.083584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 00:24:59.075 [2024-07-26 12:25:52.083767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.083794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 00:24:59.075 [2024-07-26 12:25:52.083974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.075 [2024-07-26 12:25:52.084000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.075 qpair failed and we were unable to recover it. 
00:24:59.075 [2024-07-26 12:25:52.084149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.084176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.084369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.084395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.084552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.084578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.084757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.084800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.084978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.085003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 
00:24:59.076 [2024-07-26 12:25:52.085154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.085181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.085333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.085377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.085549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.085575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.085761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.085788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.085941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.085967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 
00:24:59.076 [2024-07-26 12:25:52.086145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.086173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.086301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.086327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.086504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.086530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.086684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.086711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.086865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.086891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 
00:24:59.076 [2024-07-26 12:25:52.087022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.087048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.087244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.087270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.087398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.087424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.087552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.087579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.087777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.087806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 
00:24:59.076 [2024-07-26 12:25:52.087981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.088008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.088140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.088167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.088360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.088389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.088560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.088587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.088720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.088745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 
00:24:59.076 [2024-07-26 12:25:52.088901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.088927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.089107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.089135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.089316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.089347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.089529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.089555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 00:24:59.076 [2024-07-26 12:25:52.089709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.089735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.076 qpair failed and we were unable to recover it. 
00:24:59.076 [2024-07-26 12:25:52.089890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.076 [2024-07-26 12:25:52.089915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.090042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.090074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.090234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.090260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.090394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.090419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.090574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.090600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 
00:24:59.077 [2024-07-26 12:25:52.090722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.090747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.090898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.090925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.091088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.091115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.091307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.091334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.091464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.091506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 
00:24:59.077 [2024-07-26 12:25:52.091675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.091701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.091887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.091913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.092041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.092072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.092227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.092269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.092446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.092474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 
00:24:59.077 [2024-07-26 12:25:52.092626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.092652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.092834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.092859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.093040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.093071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.093251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.093278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.093423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.093449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 
00:24:59.077 [2024-07-26 12:25:52.093664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.093691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.093891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.093921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.094084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.094114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.094292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.094317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.094474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.094504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 
00:24:59.077 [2024-07-26 12:25:52.094652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.094677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.094795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.094821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.094994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.095020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.095205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.095232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 00:24:59.077 [2024-07-26 12:25:52.095390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.077 [2024-07-26 12:25:52.095416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.077 qpair failed and we were unable to recover it. 
00:24:59.077 [2024-07-26 12:25:52.095572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.077 [2024-07-26 12:25:52.095597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.077 qpair failed and we were unable to recover it.
[the three lines above repeat for tqpair=0x7fb4f0000b90 roughly 100 further times between 12:25:52.095 and 12:25:52.115; only the timestamps advance, the error is otherwise identical]
00:24:59.081 [2024-07-26 12:25:52.115281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.081 [2024-07-26 12:25:52.115326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:24:59.081 qpair failed and we were unable to recover it.
[the same error triplet then repeats for tqpair=0x21bf250 through 12:25:52.117, again with only the timestamps changing]
00:24:59.081 [2024-07-26 12:25:52.118138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.081 [2024-07-26 12:25:52.118165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.081 qpair failed and we were unable to recover it. 00:24:59.081 [2024-07-26 12:25:52.118324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.081 [2024-07-26 12:25:52.118353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.081 qpair failed and we were unable to recover it. 00:24:59.081 [2024-07-26 12:25:52.118537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.081 [2024-07-26 12:25:52.118564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.081 qpair failed and we were unable to recover it. 00:24:59.081 [2024-07-26 12:25:52.118718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.081 [2024-07-26 12:25:52.118745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.081 qpair failed and we were unable to recover it. 00:24:59.081 [2024-07-26 12:25:52.118948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.081 [2024-07-26 12:25:52.118978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.081 qpair failed and we were unable to recover it. 
00:24:59.081 [2024-07-26 12:25:52.119162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.081 [2024-07-26 12:25:52.119189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.081 qpair failed and we were unable to recover it. 00:24:59.081 [2024-07-26 12:25:52.119340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.081 [2024-07-26 12:25:52.119367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.081 qpair failed and we were unable to recover it. 00:24:59.081 [2024-07-26 12:25:52.119488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.081 [2024-07-26 12:25:52.119514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.081 qpair failed and we were unable to recover it. 00:24:59.081 [2024-07-26 12:25:52.119676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.081 [2024-07-26 12:25:52.119703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.081 qpair failed and we were unable to recover it. 00:24:59.081 [2024-07-26 12:25:52.119855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.081 [2024-07-26 12:25:52.119882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.081 qpair failed and we were unable to recover it. 
00:24:59.081 [2024-07-26 12:25:52.120008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.081 [2024-07-26 12:25:52.120037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.081 qpair failed and we were unable to recover it. 00:24:59.081 [2024-07-26 12:25:52.120220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.081 [2024-07-26 12:25:52.120247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.081 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.120417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.120461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.120667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.120694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.120824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.120866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 
00:24:59.082 [2024-07-26 12:25:52.121043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.121080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.121259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.121286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.121440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.121485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.121760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.121810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.122009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.122036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 
00:24:59.082 [2024-07-26 12:25:52.122172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.122214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.122350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.122379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.122568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.122594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.122748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.122775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.122932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.122958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 
00:24:59.082 [2024-07-26 12:25:52.123139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.123166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.123320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.123356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.123514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.123541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.123664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.123690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.123808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.123835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 
00:24:59.082 [2024-07-26 12:25:52.124022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.124049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.124176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.124203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.124382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.124408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.124580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.124610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.124788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.124815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 
00:24:59.082 [2024-07-26 12:25:52.124961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.124988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.125141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.125168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.125313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.125340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.125511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.125540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.125738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.125765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 
00:24:59.082 [2024-07-26 12:25:52.125892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.125918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.126105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.126131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.082 [2024-07-26 12:25:52.126291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.082 [2024-07-26 12:25:52.126328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.082 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.126505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.126531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.126678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.126705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 
00:24:59.083 [2024-07-26 12:25:52.126840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.126867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.127048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.127083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.127259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.127285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.127465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.127506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.127707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.127734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 
00:24:59.083 [2024-07-26 12:25:52.127913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.127939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.128101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.128127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.128255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.128281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.128412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.128439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.128597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.128639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 
00:24:59.083 [2024-07-26 12:25:52.128842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.128869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.129026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.129052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.129216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.129242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.129399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.129427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.129562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.129589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 
00:24:59.083 [2024-07-26 12:25:52.129713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.129739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.129860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.129887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.130074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.130111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.130291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.130326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.130573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.130628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 
00:24:59.083 [2024-07-26 12:25:52.130848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.130877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.131070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.131120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.131283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.131317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.131454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.131481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.131633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.131660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 
00:24:59.083 [2024-07-26 12:25:52.131815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.131842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.132009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.132037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.132247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.132275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.132482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.132512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.132695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.132722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 
00:24:59.083 [2024-07-26 12:25:52.132870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.132897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.133071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.083 [2024-07-26 12:25:52.133112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.083 qpair failed and we were unable to recover it. 00:24:59.083 [2024-07-26 12:25:52.133288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.084 [2024-07-26 12:25:52.133328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.084 qpair failed and we were unable to recover it. 00:24:59.084 [2024-07-26 12:25:52.133490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.084 [2024-07-26 12:25:52.133519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.084 qpair failed and we were unable to recover it. 00:24:59.084 [2024-07-26 12:25:52.133697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.084 [2024-07-26 12:25:52.133724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.084 qpair failed and we were unable to recover it. 
00:24:59.087 [2024-07-26 12:25:52.156107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.156138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.156305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.156339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.156495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.156523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.156679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.156726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.156940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.156967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 
00:24:59.087 [2024-07-26 12:25:52.157121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.157148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.157305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.157334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.157514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.157544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.157680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.157712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.157907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.157936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 
00:24:59.087 [2024-07-26 12:25:52.158095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.158122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.158301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.158331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.158502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.158534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.158738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.158768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.158923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.158950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 
00:24:59.087 [2024-07-26 12:25:52.159121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.159148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.159302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.159348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.159520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.159550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.159720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.159747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.159919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.159948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 
00:24:59.087 [2024-07-26 12:25:52.160097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.087 [2024-07-26 12:25:52.160130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.087 qpair failed and we were unable to recover it. 00:24:59.087 [2024-07-26 12:25:52.160322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.160351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.160507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.160533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.160690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.160717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.160929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.160974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 
00:24:59.088 [2024-07-26 12:25:52.161139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.161172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.161359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.161386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.161564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.161605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.161775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.161804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.161975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.162008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 
00:24:59.088 [2024-07-26 12:25:52.162157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.162184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.162358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.162388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.162676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.162726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.162900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.162931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.163104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.163132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 
00:24:59.088 [2024-07-26 12:25:52.163285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.163311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.163499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.163528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.163703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.163745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.163902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.163929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.164085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.164113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 
00:24:59.088 [2024-07-26 12:25:52.164286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.164314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.164456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.164488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.164636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.164663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.164800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.164827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.164982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.165034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 
00:24:59.088 [2024-07-26 12:25:52.165221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.165256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.165412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.165439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.165582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.165611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.165752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.165781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.165984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.166014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 
00:24:59.088 [2024-07-26 12:25:52.166175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.166202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.166354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.166384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.166584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.166611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.166778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.166807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.088 [2024-07-26 12:25:52.166958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.166984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 
00:24:59.088 [2024-07-26 12:25:52.167163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.088 [2024-07-26 12:25:52.167194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.088 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.167338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.167367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.167537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.167568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.167742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.167769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.167894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.167938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 
00:24:59.089 [2024-07-26 12:25:52.168107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.168138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.168281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.168320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.168480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.168506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.168667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.168719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.168921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.168962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 
00:24:59.089 [2024-07-26 12:25:52.169138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.169171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.169373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.169400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.169532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.169561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.169691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.169717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.169845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.169872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 
00:24:59.089 [2024-07-26 12:25:52.170026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.170056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.170246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.170272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.170454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.170482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.170646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.170675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 00:24:59.089 [2024-07-26 12:25:52.170852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.089 [2024-07-26 12:25:52.170878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.089 qpair failed and we were unable to recover it. 
00:24:59.089 [2024-07-26 12:25:52.171054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.089 [2024-07-26 12:25:52.171092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.089 qpair failed and we were unable to recover it.
00:24:59.089 [... log condensed: the same three-line sequence (posix.c:1023:posix_sock_create connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error / "qpair failed and we were unable to recover it.") repeats 114 more times between 12:25:52.171306 and 12:25:52.193453, alternating between tqpair=0x7fb4f0000b90 and tqpair=0x7fb500000b90, always with addr=10.0.0.2, port=4420 ...]
00:24:59.092 [2024-07-26 12:25:52.193597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.092 [2024-07-26 12:25:52.193627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.092 qpair failed and we were unable to recover it. 00:24:59.092 [2024-07-26 12:25:52.193799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.092 [2024-07-26 12:25:52.193826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.092 qpair failed and we were unable to recover it. 00:24:59.092 [2024-07-26 12:25:52.194002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.194032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.194265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.194303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.194487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.194520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 
00:24:59.093 [2024-07-26 12:25:52.194698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.194725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.194885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.194955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.195101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.195136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.195313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.195344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.195496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.195523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 
00:24:59.093 [2024-07-26 12:25:52.195681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.195710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.195884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.195915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.196057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.196104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.196280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.196306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.196481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.196507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 
00:24:59.093 [2024-07-26 12:25:52.196641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.196668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.196818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.196847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.196988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.197024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.197205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.197235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.197384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.197414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 
00:24:59.093 [2024-07-26 12:25:52.197585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.197617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.197791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.197818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.198004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.198036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.198219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.198249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.198417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.198446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 
00:24:59.093 [2024-07-26 12:25:52.198629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.198665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.198807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.198842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.198978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.199005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.199135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.199163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.199292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.199319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 
00:24:59.093 [2024-07-26 12:25:52.199485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.199533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.199760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.199806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.200004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.200033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.200214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.200243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.200378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.200405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 
00:24:59.093 [2024-07-26 12:25:52.200562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.200588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.200773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.200804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.200972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.093 [2024-07-26 12:25:52.201000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.093 qpair failed and we were unable to recover it. 00:24:59.093 [2024-07-26 12:25:52.201177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.201205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.201334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.201378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 
00:24:59.094 [2024-07-26 12:25:52.201548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.201579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.201723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.201750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.201934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.201961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.202111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.202155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.202356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.202383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 
00:24:59.094 [2024-07-26 12:25:52.202550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.202580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.202800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.202855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.203023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.203054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.203255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.203283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.203411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.203438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 
00:24:59.094 [2024-07-26 12:25:52.203592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.203637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.203808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.203836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.203992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.204022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.204195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.204223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.204346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.204372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 
00:24:59.094 [2024-07-26 12:25:52.204495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.204520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.204711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.204744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.204917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.204944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.205145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.205176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.205344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.205373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 
00:24:59.094 [2024-07-26 12:25:52.205533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.205562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.205712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.205739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.205875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.205919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.206100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.206127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.206327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.206357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 
00:24:59.094 [2024-07-26 12:25:52.206507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.206533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.206693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.206739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.206887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.206916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.207082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.207111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.207301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.207328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 
00:24:59.094 [2024-07-26 12:25:52.207483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.207513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.207694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.207723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.207916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.207944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.094 [2024-07-26 12:25:52.208139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.094 [2024-07-26 12:25:52.208166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.094 qpair failed and we were unable to recover it. 00:24:59.095 [2024-07-26 12:25:52.208352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.095 [2024-07-26 12:25:52.208382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.095 qpair failed and we were unable to recover it. 
00:24:59.095 [2024-07-26 12:25:52.208563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.095 [2024-07-26 12:25:52.208593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.095 qpair failed and we were unable to recover it. 00:24:59.095 [2024-07-26 12:25:52.208766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.095 [2024-07-26 12:25:52.208797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.095 qpair failed and we were unable to recover it. 00:24:59.095 [2024-07-26 12:25:52.208970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.095 [2024-07-26 12:25:52.208996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.095 qpair failed and we were unable to recover it. 00:24:59.095 [2024-07-26 12:25:52.209167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.095 [2024-07-26 12:25:52.209194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.095 qpair failed and we were unable to recover it. 00:24:59.095 [2024-07-26 12:25:52.209349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.095 [2024-07-26 12:25:52.209376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.095 qpair failed and we were unable to recover it. 
00:24:59.098 [2024-07-26 12:25:52.231552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.231582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.231780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.231810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.231989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.232016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.232192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.232219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.232400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.232435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 
00:24:59.098 [2024-07-26 12:25:52.232607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.232637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.232837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.232865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.232993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.233020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.233229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.233259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.233416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.233446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 
00:24:59.098 [2024-07-26 12:25:52.233611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.233640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.233816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.233845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.234000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.234043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.234222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.234253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.234434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.234465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 
00:24:59.098 [2024-07-26 12:25:52.234615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.234644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.234805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.234833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.234992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.235020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.235231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.235259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.098 qpair failed and we were unable to recover it. 00:24:59.098 [2024-07-26 12:25:52.235416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.098 [2024-07-26 12:25:52.235444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 
00:24:59.099 [2024-07-26 12:25:52.235571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.235598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.235740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.235767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.235977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.236006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.236151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.236181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.236336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.236364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 
00:24:59.099 [2024-07-26 12:25:52.236554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.236584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.236751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.236784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.236960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.236988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.237126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.237152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.237307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.237345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 
00:24:59.099 [2024-07-26 12:25:52.237501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.237545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.237703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.237730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.237909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.237955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.238123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.238155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.238311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.238345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 
00:24:59.099 [2024-07-26 12:25:52.238500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.238527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.238703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.238732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.238882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.238912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.239087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.239130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.239305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.239339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 
00:24:59.099 [2024-07-26 12:25:52.239475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.239505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.239671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.239701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.239865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.239893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.240074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.240108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.240244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.240277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 
00:24:59.099 [2024-07-26 12:25:52.240439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.240486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.240679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.240709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.240884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.240911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.241038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.241073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.241197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.241225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 
00:24:59.099 [2024-07-26 12:25:52.241349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.241377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.241533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.241563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.241689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.241717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.099 [2024-07-26 12:25:52.241892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.099 [2024-07-26 12:25:52.241938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.099 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.242151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.242178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 
00:24:59.100 [2024-07-26 12:25:52.242305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.242345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.242510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.242537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.242661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.242691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.242862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.242910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.243094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.243121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 
00:24:59.100 [2024-07-26 12:25:52.243295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.243328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.243497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.243529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.243681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.243710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.243912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.243939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.244142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.244176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 
00:24:59.100 [2024-07-26 12:25:52.244320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.244363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.244512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.244542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.244685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.244712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.244843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.244873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.245073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.245104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 
00:24:59.100 [2024-07-26 12:25:52.245252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.245282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.245490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.245517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.245732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.245762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.245936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.245966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 00:24:59.100 [2024-07-26 12:25:52.246111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.100 [2024-07-26 12:25:52.246141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.100 qpair failed and we were unable to recover it. 
00:24:59.100 [2024-07-26 12:25:52.246294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.100 [2024-07-26 12:25:52.246321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.100 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create connect() failed, errno = 111; nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every connection attempt from 12:25:52.246294 through 12:25:52.268737 ...]
00:24:59.103 [2024-07-26 12:25:52.268706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.103 [2024-07-26 12:25:52.268737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.103 qpair failed and we were unable to recover it.
00:24:59.103 [2024-07-26 12:25:52.268893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.103 [2024-07-26 12:25:52.268921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.103 qpair failed and we were unable to recover it. 00:24:59.103 [2024-07-26 12:25:52.269086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.103 [2024-07-26 12:25:52.269114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.103 qpair failed and we were unable to recover it. 00:24:59.103 [2024-07-26 12:25:52.269311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.103 [2024-07-26 12:25:52.269343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.269490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.269521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.269722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.269753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 
00:24:59.104 [2024-07-26 12:25:52.269925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.269955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.270108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.270135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.270288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.270332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.270501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.270530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.270676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.270706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 
00:24:59.104 [2024-07-26 12:25:52.270841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.270884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.271066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.271093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.271256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.271285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.271409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.271438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.271618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.271653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 
00:24:59.104 [2024-07-26 12:25:52.271820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.271851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.272039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.272089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.272252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.272279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.272440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.272470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.272642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.272671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 
00:24:59.104 [2024-07-26 12:25:52.272841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.272871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.273044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.273079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.273262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.273291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.273464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.273496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.273690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.273720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 
00:24:59.104 [2024-07-26 12:25:52.273881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.273908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.274073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.274121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.274265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.274299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.274503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.274532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.274702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.274729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 
00:24:59.104 [2024-07-26 12:25:52.274863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.274891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.275073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.275101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.104 qpair failed and we were unable to recover it. 00:24:59.104 [2024-07-26 12:25:52.275257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.104 [2024-07-26 12:25:52.275287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.275443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.275473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.275611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.275638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 
00:24:59.105 [2024-07-26 12:25:52.275781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.275811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.276005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.276034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.276212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.276241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.276417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.276447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.276588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.276618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 
00:24:59.105 [2024-07-26 12:25:52.276776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.276806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.277008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.277038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.277189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.277216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.277370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.277397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.277598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.277628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 
00:24:59.105 [2024-07-26 12:25:52.277812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.277839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.277975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.278019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.278225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.278255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.278389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.278417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.278600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.278628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 
00:24:59.105 [2024-07-26 12:25:52.278755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.278781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.278911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.278938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.279126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.279159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.279312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.279341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.279466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.279517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 
00:24:59.105 [2024-07-26 12:25:52.279666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.279695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.279875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.279905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.280075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.280103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.280240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.280288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.280479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.280509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 
00:24:59.105 [2024-07-26 12:25:52.280675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.280705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.280876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.280903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.281092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.281124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.105 qpair failed and we were unable to recover it. 00:24:59.105 [2024-07-26 12:25:52.281267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.105 [2024-07-26 12:25:52.281297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 00:24:59.106 [2024-07-26 12:25:52.281490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.281522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 
00:24:59.106 [2024-07-26 12:25:52.281728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.281755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 00:24:59.106 [2024-07-26 12:25:52.281907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.281938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 00:24:59.106 [2024-07-26 12:25:52.282106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.282137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 00:24:59.106 [2024-07-26 12:25:52.282292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.282322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 00:24:59.106 [2024-07-26 12:25:52.282482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.282511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 
00:24:59.106 [2024-07-26 12:25:52.282688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.282718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 00:24:59.106 [2024-07-26 12:25:52.282886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.282917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 00:24:59.106 [2024-07-26 12:25:52.283119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.283153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 00:24:59.106 [2024-07-26 12:25:52.283357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.283383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 00:24:59.106 [2024-07-26 12:25:52.283516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.283543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 
00:24:59.106 [2024-07-26 12:25:52.283701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.283728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 00:24:59.106 [2024-07-26 12:25:52.283921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.283948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 00:24:59.106 [2024-07-26 12:25:52.284103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.284131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 00:24:59.106 [2024-07-26 12:25:52.284309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.284339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 00:24:59.106 [2024-07-26 12:25:52.284490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.106 [2024-07-26 12:25:52.284521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.106 qpair failed and we were unable to recover it. 
00:24:59.395 [2024-07-26 12:25:52.299436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.395 [2024-07-26 12:25:52.299466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.395 qpair failed and we were unable to recover it.
00:24:59.395 [2024-07-26 12:25:52.299639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.395 [2024-07-26 12:25:52.299668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.395 qpair failed and we were unable to recover it.
00:24:59.395 [2024-07-26 12:25:52.299862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.395 [2024-07-26 12:25:52.299888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.395 qpair failed and we were unable to recover it.
00:24:59.395 [2024-07-26 12:25:52.300010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.395 [2024-07-26 12:25:52.300036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.395 qpair failed and we were unable to recover it.
00:24:59.395 [2024-07-26 12:25:52.300182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.395 [2024-07-26 12:25:52.300222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.395 qpair failed and we were unable to recover it.
00:24:59.396 [2024-07-26 12:25:52.306454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.306480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.306613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.306639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.306797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.306840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.307011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.307037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.307164] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21cd230 is same with the state(5) to be set 00:24:59.396 [2024-07-26 12:25:52.307441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.307515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 
00:24:59.396 [2024-07-26 12:25:52.307723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.307752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.307935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.307963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.308173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.308201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.308355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.308382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.308561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.308588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 
00:24:59.396 [2024-07-26 12:25:52.308826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.308875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.309055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.309110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.309267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.309294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.309457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.309484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.309639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.309666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 
00:24:59.396 [2024-07-26 12:25:52.309790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.309817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.310023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.310053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.310259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.310288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.310465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.310491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.310671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.310715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 
00:24:59.396 [2024-07-26 12:25:52.310906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.310935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.311112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.311139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.311315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.311343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.311547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.311573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.311755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.311782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 
00:24:59.396 [2024-07-26 12:25:52.311905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.311931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.312130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.396 [2024-07-26 12:25:52.312157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.396 qpair failed and we were unable to recover it. 00:24:59.396 [2024-07-26 12:25:52.312335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.312360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.312512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.312539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.312743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.312772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 
00:24:59.397 [2024-07-26 12:25:52.312965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.312991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.313197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.313227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.313361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.313396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.313552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.313580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.313738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.313765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 
00:24:59.397 [2024-07-26 12:25:52.313896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.313923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.314117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.314144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.314276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.314307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.314493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.314537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.314725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.314752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 
00:24:59.397 [2024-07-26 12:25:52.314951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.314999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.315185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.315216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.315370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.315396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.315608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.315657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.315857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.315906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 
00:24:59.397 [2024-07-26 12:25:52.316105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.316131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.316283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.316313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.316508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.316557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.316720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.316746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.316922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.316951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 
00:24:59.397 [2024-07-26 12:25:52.317135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.317161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.317318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.317342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.317469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.317512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.317747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.317795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.317973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.317997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 
00:24:59.397 [2024-07-26 12:25:52.318182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.318210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.318380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.318407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.318558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.318582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.318735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.318775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.318949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.318976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 
00:24:59.397 [2024-07-26 12:25:52.319178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.319214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.319462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.319500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.319680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.397 [2024-07-26 12:25:52.319707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.397 qpair failed and we were unable to recover it. 00:24:59.397 [2024-07-26 12:25:52.319916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.319941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.320149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.320184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 
00:24:59.398 [2024-07-26 12:25:52.320357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.320386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.320546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.320575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.320751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.320782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.320959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.320987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.321175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.321211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 
00:24:59.398 [2024-07-26 12:25:52.321357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.321386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.321532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.321562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.321728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.321759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.321941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.321971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.322140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.322170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 
00:24:59.398 [2024-07-26 12:25:52.322303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.322342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.322533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.322563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.322768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.322801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.322944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.322974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.323121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.323162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 
00:24:59.398 [2024-07-26 12:25:52.323356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.323384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.323526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.323555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.323746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.323779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.323938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.323968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.324128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.324157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 
00:24:59.398 [2024-07-26 12:25:52.324333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.324360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.324536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.324566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.324782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.324826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.324992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.325024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.325218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.325245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 
00:24:59.398 [2024-07-26 12:25:52.325434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.325465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.325644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.325677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.325824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.325853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.326039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.326077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.326293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.326331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 
00:24:59.398 [2024-07-26 12:25:52.326499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.326529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.326723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.326753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.326914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.326946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.398 qpair failed and we were unable to recover it. 00:24:59.398 [2024-07-26 12:25:52.327132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.398 [2024-07-26 12:25:52.327159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.327292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.327333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 
00:24:59.399 [2024-07-26 12:25:52.327522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.327560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.327766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.327818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.327998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.328025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.328201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.328231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.328431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.328467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 
00:24:59.399 [2024-07-26 12:25:52.328780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.328831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.329057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.329119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.329300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.329344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.329514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.329544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.329711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.329740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 
00:24:59.399 [2024-07-26 12:25:52.329922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.329949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.330106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.330133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.330287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.330315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.330446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.330494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.330695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.330725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 
00:24:59.399 [2024-07-26 12:25:52.330890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.330920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.331065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.331112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.331268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.331295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.331514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.331544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.331717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.331746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 
00:24:59.399 [2024-07-26 12:25:52.331912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.331942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.332127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.332155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.332305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.332356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.332554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.332584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.332802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.332833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 
00:24:59.399 [2024-07-26 12:25:52.333030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.333072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.333279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.333305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.399 qpair failed and we were unable to recover it. 00:24:59.399 [2024-07-26 12:25:52.333478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.399 [2024-07-26 12:25:52.333519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.333673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.333703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.333898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.333928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 
00:24:59.400 [2024-07-26 12:25:52.334086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.334138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.334312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.334352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.334565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.334610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.334756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.334800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.334923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.334950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 
00:24:59.400 [2024-07-26 12:25:52.335129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.335173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.335345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.335389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.335556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.335585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.335779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.335808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.335944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.335971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 
00:24:59.400 [2024-07-26 12:25:52.336120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.336147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.336302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.336340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.336512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.336556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.336739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.336766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.336947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.336978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 
00:24:59.400 [2024-07-26 12:25:52.337142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.337169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.337341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.337389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.337567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.337610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.337769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.337796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.337943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.337982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 
00:24:59.400 [2024-07-26 12:25:52.338187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.338220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.338400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.338431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.338612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.338642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.338811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.338867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.339076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.339122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 
00:24:59.400 [2024-07-26 12:25:52.339281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.339310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.339474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.339516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.339731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.339781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.339994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.340020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.340185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.340212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 
00:24:59.400 [2024-07-26 12:25:52.340391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.340420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.340634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.340696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.340924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.400 [2024-07-26 12:25:52.340976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.400 qpair failed and we were unable to recover it. 00:24:59.400 [2024-07-26 12:25:52.341161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.401 [2024-07-26 12:25:52.341188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.401 qpair failed and we were unable to recover it. 00:24:59.401 [2024-07-26 12:25:52.341316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.401 [2024-07-26 12:25:52.341342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.401 qpair failed and we were unable to recover it. 
00:24:59.401 [2024-07-26 12:25:52.341500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.401 [2024-07-26 12:25:52.341528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.401 qpair failed and we were unable to recover it. 00:24:59.401 [2024-07-26 12:25:52.341748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.401 [2024-07-26 12:25:52.341800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.401 qpair failed and we were unable to recover it. 00:24:59.401 [2024-07-26 12:25:52.342040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.401 [2024-07-26 12:25:52.342072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.401 qpair failed and we were unable to recover it. 00:24:59.401 [2024-07-26 12:25:52.342207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.401 [2024-07-26 12:25:52.342237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.401 qpair failed and we were unable to recover it. 00:24:59.401 [2024-07-26 12:25:52.342385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.401 [2024-07-26 12:25:52.342411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.401 qpair failed and we were unable to recover it. 
00:24:59.401 [2024-07-26 12:25:52.342667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.401 [2024-07-26 12:25:52.342715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.401 qpair failed and we were unable to recover it. 00:24:59.401 [2024-07-26 12:25:52.342915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.401 [2024-07-26 12:25:52.342944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.401 qpair failed and we were unable to recover it. 00:24:59.401 [2024-07-26 12:25:52.343157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.401 [2024-07-26 12:25:52.343183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.401 qpair failed and we were unable to recover it. 00:24:59.401 [2024-07-26 12:25:52.343361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.401 [2024-07-26 12:25:52.343387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.401 qpair failed and we were unable to recover it. 00:24:59.401 [2024-07-26 12:25:52.343564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.401 [2024-07-26 12:25:52.343594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.401 qpair failed and we were unable to recover it. 
00:24:59.404 [2024-07-26 12:25:52.366315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.366347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.404 [2024-07-26 12:25:52.366511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.366537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.404 [2024-07-26 12:25:52.366686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.366712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.404 [2024-07-26 12:25:52.366850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.366879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.404 [2024-07-26 12:25:52.367018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.367047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 
00:24:59.404 [2024-07-26 12:25:52.367216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.367242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.404 [2024-07-26 12:25:52.367390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.367417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.404 [2024-07-26 12:25:52.367592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.367618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.404 [2024-07-26 12:25:52.367804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.367831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.404 [2024-07-26 12:25:52.368024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.368054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 
00:24:59.404 [2024-07-26 12:25:52.368207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.368233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.404 [2024-07-26 12:25:52.368392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.368419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.404 [2024-07-26 12:25:52.368568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.368597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.404 [2024-07-26 12:25:52.368761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.368790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.404 [2024-07-26 12:25:52.368940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.368967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 
00:24:59.404 [2024-07-26 12:25:52.369123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.369150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.404 [2024-07-26 12:25:52.369313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.404 [2024-07-26 12:25:52.369342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.404 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.369515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.369541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.369714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.369757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.369933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.369962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 
00:24:59.405 [2024-07-26 12:25:52.370139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.370166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.370345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.370389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.370588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.370618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.370764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.370790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.370947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.370974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 
00:24:59.405 [2024-07-26 12:25:52.371102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.371129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.371304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.371331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.371485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.371521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.371652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.371678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.371802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.371840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 
00:24:59.405 [2024-07-26 12:25:52.372020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.372048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.372203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.372232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.372379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.372405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.372554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.372580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.372733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.372760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 
00:24:59.405 [2024-07-26 12:25:52.372938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.372967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.373146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.373172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.373368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.373397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.373569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.373595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.373769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.373812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 
00:24:59.405 [2024-07-26 12:25:52.373981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.374007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.374162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.374189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.374358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.374387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.374601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.374627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.374785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.374811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 
00:24:59.405 [2024-07-26 12:25:52.374960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.374986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.375143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.375170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.375350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.375376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.375496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.375522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.375660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.375690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 
00:24:59.405 [2024-07-26 12:25:52.375845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.375871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.376042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.376090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.376243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.405 [2024-07-26 12:25:52.376269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.405 qpair failed and we were unable to recover it. 00:24:59.405 [2024-07-26 12:25:52.376446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.376472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.376638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.376667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 
00:24:59.406 [2024-07-26 12:25:52.376827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.376856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.377011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.377037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.377174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.377211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.377365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.377391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.377511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.377537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 
00:24:59.406 [2024-07-26 12:25:52.377692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.377735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.377894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.377921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.378116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.378143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.378301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.378327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.378505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.378534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 
00:24:59.406 [2024-07-26 12:25:52.378684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.378710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.378904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.378933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.379105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.379132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.379258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.379285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.379471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.379501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 
00:24:59.406 [2024-07-26 12:25:52.379637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.379666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.379842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.379868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.380043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.380079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.380242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.380268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.380424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.380450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 
00:24:59.406 [2024-07-26 12:25:52.380631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.380657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.380800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.380847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.381000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.381026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.381184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.381211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 00:24:59.406 [2024-07-26 12:25:52.381395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.406 [2024-07-26 12:25:52.381422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.406 qpair failed and we were unable to recover it. 
00:24:59.410 [2024-07-26 12:25:52.403588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.403626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.403808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.403846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.404043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.404094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.404293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.404326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.404509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.404535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 
00:24:59.410 [2024-07-26 12:25:52.404689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.404731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.404887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.404917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.405076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.405114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.405269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.405311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.405488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.405529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 
00:24:59.410 [2024-07-26 12:25:52.405691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.405720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.405896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.405926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.406107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.406139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.406293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.406320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.406457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.406486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 
00:24:59.410 [2024-07-26 12:25:52.406644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.406675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.406800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.406827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.407006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.407049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.407238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.407265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.407431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.407458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 
00:24:59.410 [2024-07-26 12:25:52.407608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.410 [2024-07-26 12:25:52.407638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.410 qpair failed and we were unable to recover it. 00:24:59.410 [2024-07-26 12:25:52.407780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.407810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.407988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.408016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.408194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.408225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.408392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.408422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 
00:24:59.411 [2024-07-26 12:25:52.408594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.408622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.408800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.408830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.409001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.409032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.409256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.409284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.409416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.409443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 
00:24:59.411 [2024-07-26 12:25:52.409572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.409600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.409796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.409823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.410009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.410037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.410228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.410259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.410410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.410438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 
00:24:59.411 [2024-07-26 12:25:52.410567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.410614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.410758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.410791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.410989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.411016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.411167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.411198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.411371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.411402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 
00:24:59.411 [2024-07-26 12:25:52.411551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.411579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.411732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.411775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.411943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.411973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.412133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.412162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.412321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.412364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 
00:24:59.411 [2024-07-26 12:25:52.412536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.412566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.412714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.412743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.412925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.412954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.413138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.413165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.413322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.413352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 
00:24:59.411 [2024-07-26 12:25:52.413498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.413532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.413734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.413763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.413939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.413966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.414143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.414175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 00:24:59.411 [2024-07-26 12:25:52.414378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.411 [2024-07-26 12:25:52.414408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.411 qpair failed and we were unable to recover it. 
00:24:59.411 [2024-07-26 12:25:52.414604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.414631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.414805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.414836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.414968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.414998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.415180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.415208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.415354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.415384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 
00:24:59.412 [2024-07-26 12:25:52.415542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.415571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.415749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.415776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.415952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.415982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.416160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.416192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.416368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.416396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 
00:24:59.412 [2024-07-26 12:25:52.416515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.416560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.416710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.416743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.416895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.416922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.417085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.417128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.417301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.417330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 
00:24:59.412 [2024-07-26 12:25:52.417495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.417523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.417701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.417732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.417938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.417965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.418146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.418173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 00:24:59.412 [2024-07-26 12:25:52.418345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.412 [2024-07-26 12:25:52.418376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.412 qpair failed and we were unable to recover it. 
00:24:59.412 [2024-07-26 12:25:52.418529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.412 [2024-07-26 12:25:52.418565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.412 qpair failed and we were unable to recover it.
00:24:59.412 [... the same posix_sock_create connect() errno = 111 / nvme_tcp_qpair_connect_sock failure for tqpair=0x7fb4f0000b90 (addr=10.0.0.2, port=4420) repeats through 12:25:52.440865 ...]
00:24:59.416 [2024-07-26 12:25:52.440985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.441011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.441196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.441223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.441401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.441431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.441603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.441630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.441800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.441829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 
00:24:59.416 [2024-07-26 12:25:52.442018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.442052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.442257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.442284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.442454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.442484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.442648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.442678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.442881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.442907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 
00:24:59.416 [2024-07-26 12:25:52.443124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.443152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.443281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.443308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.443459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.443485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.443611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.443657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.443831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.443860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 
00:24:59.416 [2024-07-26 12:25:52.444004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.444031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.444174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.444202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.444401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.444431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.444607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.444635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.444839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.444869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 
00:24:59.416 [2024-07-26 12:25:52.445031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.445071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.416 [2024-07-26 12:25:52.445248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.416 [2024-07-26 12:25:52.445276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.416 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.445476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.445506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.445675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.445704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.445926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.445956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 
00:24:59.417 [2024-07-26 12:25:52.446095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.446139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.446326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.446371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.446542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.446568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.446744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.446773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.446967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.446997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 
00:24:59.417 [2024-07-26 12:25:52.447180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.447208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.447343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.447371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.447584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.447614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.447767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.447794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.447975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.448002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 
00:24:59.417 [2024-07-26 12:25:52.448185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.448213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.448373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.448400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.448570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.448600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.448798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.448824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.448975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.449002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 
00:24:59.417 [2024-07-26 12:25:52.449162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.449190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.449343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.449369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.449491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.449519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.449720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.449749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.449954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.449981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 
00:24:59.417 [2024-07-26 12:25:52.450135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.450167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.450353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.450383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.450584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.450614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.450791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.450822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.450996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.451030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 
00:24:59.417 [2024-07-26 12:25:52.451232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.451263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.451443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.451470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.451635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.451664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.451843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.451870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 00:24:59.417 [2024-07-26 12:25:52.452045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.417 [2024-07-26 12:25:52.452084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.417 qpair failed and we were unable to recover it. 
00:24:59.418 [2024-07-26 12:25:52.452255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.452282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.452463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.452493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.452642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.452669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.452853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.452896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.453085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.453116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 
00:24:59.418 [2024-07-26 12:25:52.453282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.453309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.453464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.453508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.453673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.453703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.453905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.453932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.454101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.454144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 
00:24:59.418 [2024-07-26 12:25:52.454301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.454328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.454463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.454489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.454639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.454666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.454819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.454845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.455000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.455026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 
00:24:59.418 [2024-07-26 12:25:52.455205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.455235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.455408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.455438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.455649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.455676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.455826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.455855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 00:24:59.418 [2024-07-26 12:25:52.456051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.418 [2024-07-26 12:25:52.456100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.418 qpair failed and we were unable to recover it. 
00:24:59.418 [2024-07-26 12:25:52.456265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.418 [2024-07-26 12:25:52.456292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.418 qpair failed and we were unable to recover it.
[... the same three-line failure (connect() errno = 111 / sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats continuously from 12:25:52.456472 through 12:25:52.479103 ...]
00:24:59.422 [2024-07-26 12:25:52.479103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.422 [2024-07-26 12:25:52.479130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.422 qpair failed and we were unable to recover it.
00:24:59.422 [2024-07-26 12:25:52.479281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.422 [2024-07-26 12:25:52.479326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.422 qpair failed and we were unable to recover it. 00:24:59.422 [2024-07-26 12:25:52.479489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.422 [2024-07-26 12:25:52.479518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.422 qpair failed and we were unable to recover it. 00:24:59.422 [2024-07-26 12:25:52.479687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.422 [2024-07-26 12:25:52.479714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.422 qpair failed and we were unable to recover it. 00:24:59.422 [2024-07-26 12:25:52.479871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.422 [2024-07-26 12:25:52.479898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.422 qpair failed and we were unable to recover it. 00:24:59.422 [2024-07-26 12:25:52.480054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.422 [2024-07-26 12:25:52.480099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.422 qpair failed and we were unable to recover it. 
00:24:59.422 [2024-07-26 12:25:52.480249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.422 [2024-07-26 12:25:52.480275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.422 qpair failed and we were unable to recover it. 00:24:59.422 [2024-07-26 12:25:52.480470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.422 [2024-07-26 12:25:52.480499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.422 qpair failed and we were unable to recover it. 00:24:59.422 [2024-07-26 12:25:52.480665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.422 [2024-07-26 12:25:52.480696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.422 qpair failed and we were unable to recover it. 00:24:59.422 [2024-07-26 12:25:52.480849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.422 [2024-07-26 12:25:52.480876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.422 qpair failed and we were unable to recover it. 00:24:59.422 [2024-07-26 12:25:52.481103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.422 [2024-07-26 12:25:52.481131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.422 qpair failed and we were unable to recover it. 
00:24:59.422 [2024-07-26 12:25:52.481261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.422 [2024-07-26 12:25:52.481289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.422 qpair failed and we were unable to recover it. 00:24:59.422 [2024-07-26 12:25:52.481409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.422 [2024-07-26 12:25:52.481436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.422 qpair failed and we were unable to recover it. 00:24:59.422 [2024-07-26 12:25:52.481606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.422 [2024-07-26 12:25:52.481637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.422 qpair failed and we were unable to recover it. 00:24:59.422 [2024-07-26 12:25:52.481810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.481839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.482014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.482041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 
00:24:59.423 [2024-07-26 12:25:52.482222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.482257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.482425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.482454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.482595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.482622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.482771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.482813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.482987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.483014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 
00:24:59.423 [2024-07-26 12:25:52.483201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.483229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.483362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.483389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.483543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.483586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.483750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.483777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.483934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.483961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 
00:24:59.423 [2024-07-26 12:25:52.484136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.484166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.484339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.484365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.484534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.484564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.484759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.484788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.484953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.484981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 
00:24:59.423 [2024-07-26 12:25:52.485137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.485182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.485377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.485407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.485580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.485608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.485727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.485754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.485908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.485951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 
00:24:59.423 [2024-07-26 12:25:52.486118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.486145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.486339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.486368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.486538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.486568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.486747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.486773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.486970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.486999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 
00:24:59.423 [2024-07-26 12:25:52.487170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.487201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.487404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.487431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.487622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.487666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.487846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.487877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 00:24:59.423 [2024-07-26 12:25:52.488048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.488102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.423 qpair failed and we were unable to recover it. 
00:24:59.423 [2024-07-26 12:25:52.488259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.423 [2024-07-26 12:25:52.488286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.488466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.488495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.488676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.488702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.488918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.488967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.489143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.489173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 
00:24:59.424 [2024-07-26 12:25:52.489315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.489342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.489498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.489525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.489673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.489699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.489863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.489889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.490039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.490071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 
00:24:59.424 [2024-07-26 12:25:52.490281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.490315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.490488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.490514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.490666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.490693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.490867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.490896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.491046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.491079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 
00:24:59.424 [2024-07-26 12:25:52.491230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.491258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.491428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.491457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.491631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.491657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.491912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.491964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.492107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.492138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 
00:24:59.424 [2024-07-26 12:25:52.492315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.492342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.492569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.492630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.492801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.492831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.492999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.493026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.493215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.493244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 
00:24:59.424 [2024-07-26 12:25:52.493442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.493471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.493628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.493654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.493807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.493835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.494037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.494074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.424 [2024-07-26 12:25:52.494229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.494256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 
00:24:59.424 [2024-07-26 12:25:52.494450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.424 [2024-07-26 12:25:52.494478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.424 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.494675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.494701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.494874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.494904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.495051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.495087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.495257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.495284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 
00:24:59.425 [2024-07-26 12:25:52.495462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.495488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.495684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.495736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.495876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.495906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.496079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.496106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.496237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.496278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 
00:24:59.425 [2024-07-26 12:25:52.496447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.496478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.496688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.496714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.496901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.496930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.497079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.497110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.497317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.497344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 
00:24:59.425 [2024-07-26 12:25:52.497641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.497704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.497895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.497924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.498128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.498155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.498333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.498363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.498529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.498558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 
00:24:59.425 [2024-07-26 12:25:52.498727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.498765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.498925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.498951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.499108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.499151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.499324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.499350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.499476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.499520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 
00:24:59.425 [2024-07-26 12:25:52.499668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.499697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.499868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.499894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.500071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.500101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.500270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.500300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.500471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.500498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 
00:24:59.425 [2024-07-26 12:25:52.500654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.500681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.500838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.500864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.500990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.501017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.501225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.501255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.425 [2024-07-26 12:25:52.501468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.501495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 
00:24:59.425 [2024-07-26 12:25:52.501623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.425 [2024-07-26 12:25:52.501650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.425 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.501832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.501859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.502071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.502117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.502271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.502297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.502466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.502497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 
00:24:59.426 [2024-07-26 12:25:52.502705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.502732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.502891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.502919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.503094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.503124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.503266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.503296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.503473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.503500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 
00:24:59.426 [2024-07-26 12:25:52.503651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.503681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.503877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.503907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.504083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.504111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.504250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.504279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.504405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.504434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 
00:24:59.426 [2024-07-26 12:25:52.504632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.504659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.504829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.504858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.505040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.505073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.505229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.505256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.505425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.505454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 
00:24:59.426 [2024-07-26 12:25:52.505647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.505676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.505851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.505878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.505998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.506041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.506243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.506273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.506472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.506498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 
00:24:59.426 [2024-07-26 12:25:52.506636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.506669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.506838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.506868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.507010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.426 [2024-07-26 12:25:52.507036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.426 qpair failed and we were unable to recover it. 00:24:59.426 [2024-07-26 12:25:52.507192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.507221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.507428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.507454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 
00:24:59.427 [2024-07-26 12:25:52.507608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.507635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.507782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.507825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.507994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.508023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.508197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.508224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.508418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.508447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 
00:24:59.427 [2024-07-26 12:25:52.508617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.508646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.508806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.508833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.508987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.509030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.509212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.509239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.509420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.509447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 
00:24:59.427 [2024-07-26 12:25:52.509626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.509655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.509810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.509837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.510020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.510047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.510233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.510263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.510409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.510436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 
00:24:59.427 [2024-07-26 12:25:52.510614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.510640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.510818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.510847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.511045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.511084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.511232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.511259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.511409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.511439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 
00:24:59.427 [2024-07-26 12:25:52.511573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.511602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.511801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.511828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.511967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.511994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.512187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.512216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.512393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.512419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 
00:24:59.427 [2024-07-26 12:25:52.512570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.512597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.512717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.512744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.512958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.512984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.513150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.513180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 00:24:59.427 [2024-07-26 12:25:52.513350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.427 [2024-07-26 12:25:52.513380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.427 qpair failed and we were unable to recover it. 
00:24:59.427 [2024-07-26 12:25:52.513584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.428 [2024-07-26 12:25:52.513611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.428 qpair failed and we were unable to recover it. 00:24:59.428 [2024-07-26 12:25:52.513813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.428 [2024-07-26 12:25:52.513842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.428 qpair failed and we were unable to recover it. 00:24:59.428 [2024-07-26 12:25:52.514035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.428 [2024-07-26 12:25:52.514071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.428 qpair failed and we were unable to recover it. 00:24:59.428 [2024-07-26 12:25:52.514244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.428 [2024-07-26 12:25:52.514270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.428 qpair failed and we were unable to recover it. 00:24:59.428 [2024-07-26 12:25:52.514437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.428 [2024-07-26 12:25:52.514466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.428 qpair failed and we were unable to recover it. 
00:24:59.428 [2024-07-26 12:25:52.514664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.428 [2024-07-26 12:25:52.514690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.428 qpair failed and we were unable to recover it. 00:24:59.428 [2024-07-26 12:25:52.514808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.428 [2024-07-26 12:25:52.514835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.428 qpair failed and we were unable to recover it. 00:24:59.428 [2024-07-26 12:25:52.515002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.428 [2024-07-26 12:25:52.515031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.428 qpair failed and we were unable to recover it. 00:24:59.428 [2024-07-26 12:25:52.515183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.428 [2024-07-26 12:25:52.515210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.428 qpair failed and we were unable to recover it. 00:24:59.428 [2024-07-26 12:25:52.515375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.428 [2024-07-26 12:25:52.515401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.428 qpair failed and we were unable to recover it. 
00:24:59.428 [2024-07-26 12:25:52.515526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.515552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.515754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.515783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.515933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.515960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.516119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.516146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.516293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.516337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.516517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.516543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.516663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.516706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.516854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.516883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.517069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.517096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.517298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.517327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.517508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.517535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.517692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.517719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.517863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.517892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.518055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.518091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.518265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.518291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.518420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.518466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.518631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.518660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.518859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.518885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.519035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.519068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.519243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.519272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.522253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.428 [2024-07-26 12:25:52.522299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.428 qpair failed and we were unable to recover it.
00:24:59.428 [2024-07-26 12:25:52.522478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.522506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.522689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.522738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.522908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.522935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.523107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.523138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.523368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.523395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.523574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.523601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.523742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.523772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.523949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.523976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.524158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.524186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.524353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.524382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.524525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.524554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.524734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.524761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.524889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.524916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.525124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.525154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.525307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.525335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.525539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.525569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.525718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.525747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.525908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.525936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.526122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.526150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.526323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.526352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.526520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.526546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.526695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.526739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.526919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.526946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.527107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.527134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.527278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.527307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.527497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.527526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.527696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.527722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.527848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.527873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.528033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.528067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.528224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.528251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.528453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.528482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.528656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.528686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.528860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.528887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.429 [2024-07-26 12:25:52.529067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.429 [2024-07-26 12:25:52.529097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.429 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.529292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.529321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.529487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.529513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.529640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.529684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.529844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.529873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.530066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.530093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.530290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.530319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.530514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.530543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.530747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.530779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.530936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.530962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.531109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.531145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.531303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.531330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.531506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.531535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.531702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.531731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.531888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.531917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.532105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.532132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.532256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.532283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.532407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.532434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.532593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.532620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.532741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.532767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.532943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.532970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.533120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.533165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.533367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.533397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.533544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.533571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.533701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.533746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.533940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.533970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.534118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.534146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.534316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.534345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.534541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.534570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.534716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.534743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.534953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.534983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.535129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.535158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.535330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.535357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.535503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.535532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.535701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.430 [2024-07-26 12:25:52.535730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.430 qpair failed and we were unable to recover it.
00:24:59.430 [2024-07-26 12:25:52.535950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.431 [2024-07-26 12:25:52.535976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.431 qpair failed and we were unable to recover it.
00:24:59.431 [2024-07-26 12:25:52.536151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.431 [2024-07-26 12:25:52.536180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.431 qpair failed and we were unable to recover it.
00:24:59.431 [2024-07-26 12:25:52.536361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.431 [2024-07-26 12:25:52.536391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.431 qpair failed and we were unable to recover it.
00:24:59.431 [2024-07-26 12:25:52.536561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.431 [2024-07-26 12:25:52.536589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.431 qpair failed and we were unable to recover it.
00:24:59.431 [2024-07-26 12:25:52.536786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.431 [2024-07-26 12:25:52.536815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.431 qpair failed and we were unable to recover it.
00:24:59.431 [2024-07-26 12:25:52.536966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.431 [2024-07-26 12:25:52.536996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.431 qpair failed and we were unable to recover it.
00:24:59.431 [2024-07-26 12:25:52.537179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.537207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.537361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.537404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.537608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.537634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.537815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.537841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.538009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.538038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 
00:24:59.431 [2024-07-26 12:25:52.538219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.538248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.538423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.538450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.538619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.538653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.538819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.538849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.539051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.539090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 
00:24:59.431 [2024-07-26 12:25:52.539246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.539274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.539423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.539465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.539628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.539655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.539784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.539828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.539999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.540028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 
00:24:59.431 [2024-07-26 12:25:52.540213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.540240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.540417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.540445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.540638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.540667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.540807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.540833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.431 [2024-07-26 12:25:52.541001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.541030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 
00:24:59.431 [2024-07-26 12:25:52.541187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.431 [2024-07-26 12:25:52.541214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.431 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.541373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.541401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.541579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.541606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.541760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.541805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.542004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.542030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 
00:24:59.432 [2024-07-26 12:25:52.542169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.542196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.542376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.542403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.542587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.542614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.542783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.542812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.542950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.542981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 
00:24:59.432 [2024-07-26 12:25:52.543154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.543181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.543309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.543337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.543495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.543522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.543674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.543700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.543900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.543930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 
00:24:59.432 [2024-07-26 12:25:52.544091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.544122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.544291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.544318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.544443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.544489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.544650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.544679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.544856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.544882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 
00:24:59.432 [2024-07-26 12:25:52.545032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.545072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.545242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.545271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.545446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.545472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.545645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.545675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.545808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.545836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 
00:24:59.432 [2024-07-26 12:25:52.546016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.546043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.546180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.546223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.546400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.546431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.546555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.546583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.546781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.546810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 
00:24:59.432 [2024-07-26 12:25:52.546973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.547002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.547207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.547234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.432 [2024-07-26 12:25:52.547390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.432 [2024-07-26 12:25:52.547443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.432 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.547625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.547653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.547843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.547870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 
00:24:59.433 [2024-07-26 12:25:52.548089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.548117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.548297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.548324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.548507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.548533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.548652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.548694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.548873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.548900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 
00:24:59.433 [2024-07-26 12:25:52.549077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.549104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.549297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.549327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.549490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.549519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.549689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.549715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.549889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.549918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 
00:24:59.433 [2024-07-26 12:25:52.550064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.550093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.550295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.550321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.550491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.550520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.550679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.550707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.550909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.550935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 
00:24:59.433 [2024-07-26 12:25:52.551132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.551162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.551341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.551368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.551547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.551573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.551740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.551769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.551965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.551994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 
00:24:59.433 [2024-07-26 12:25:52.552148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.552174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.552297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.552325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.552473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.552499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.552710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.552736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.552901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.552930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 
00:24:59.433 [2024-07-26 12:25:52.553126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.553156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.553323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.553349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.553509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.553535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.553665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.553693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.553850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.553877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 
00:24:59.433 [2024-07-26 12:25:52.554046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.433 [2024-07-26 12:25:52.554083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.433 qpair failed and we were unable to recover it. 00:24:59.433 [2024-07-26 12:25:52.554258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.434 [2024-07-26 12:25:52.554285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.434 qpair failed and we were unable to recover it. 00:24:59.434 [2024-07-26 12:25:52.554441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.434 [2024-07-26 12:25:52.554472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.434 qpair failed and we were unable to recover it. 00:24:59.434 [2024-07-26 12:25:52.554632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.434 [2024-07-26 12:25:52.554676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.434 qpair failed and we were unable to recover it. 00:24:59.434 [2024-07-26 12:25:52.554872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.434 [2024-07-26 12:25:52.554901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.434 qpair failed and we were unable to recover it. 
00:24:59.434-00:24:59.437 [identical error triple repeats for every retry from 12:25:52.555046 through 12:25:52.576658: posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.]
00:24:59.437 [2024-07-26 12:25:52.576808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-07-26 12:25:52.576836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.437 qpair failed and we were unable to recover it. 00:24:59.437 [2024-07-26 12:25:52.577016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-07-26 12:25:52.577042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.437 qpair failed and we were unable to recover it. 00:24:59.437 [2024-07-26 12:25:52.577191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-07-26 12:25:52.577221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.437 qpair failed and we were unable to recover it. 00:24:59.437 [2024-07-26 12:25:52.577386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-07-26 12:25:52.577415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.437 qpair failed and we were unable to recover it. 00:24:59.437 [2024-07-26 12:25:52.577599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-07-26 12:25:52.577626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.437 qpair failed and we were unable to recover it. 
00:24:59.437 [2024-07-26 12:25:52.577780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-07-26 12:25:52.577807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.437 qpair failed and we were unable to recover it. 00:24:59.437 [2024-07-26 12:25:52.577989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-07-26 12:25:52.578018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.437 qpair failed and we were unable to recover it. 00:24:59.437 [2024-07-26 12:25:52.578224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-07-26 12:25:52.578255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.437 qpair failed and we were unable to recover it. 00:24:59.437 [2024-07-26 12:25:52.578396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.437 [2024-07-26 12:25:52.578425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.578599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.578629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 
00:24:59.438 [2024-07-26 12:25:52.578784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.578811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.578988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.579018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.579232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.579262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.579459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.579486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.579651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.579680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 
00:24:59.438 [2024-07-26 12:25:52.579845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.579874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.580030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.580056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.580224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.580250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.580427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.580457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.580631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.580658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 
00:24:59.438 [2024-07-26 12:25:52.580854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.580883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.581095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.581122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.581279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.581306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.581481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.581510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.581710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.581737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 
00:24:59.438 [2024-07-26 12:25:52.581889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.581916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.582118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.582148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.582283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.582312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.582476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.582502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.582674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.582703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 
00:24:59.438 [2024-07-26 12:25:52.582864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.582893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.583139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.583166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.583361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.583391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.583564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.583593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.583769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.583796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 
00:24:59.438 [2024-07-26 12:25:52.583961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.583990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.584141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.584169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.584357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.584384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.584588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.584617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.438 qpair failed and we were unable to recover it. 00:24:59.438 [2024-07-26 12:25:52.584786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.438 [2024-07-26 12:25:52.584814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 
00:24:59.439 [2024-07-26 12:25:52.585034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.585069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.585222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.585251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.585441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.585470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.585668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.585695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.585825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.585852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 
00:24:59.439 [2024-07-26 12:25:52.586038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.586092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.586272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.586300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.586499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.586533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.586701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.586730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.586899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.586926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 
00:24:59.439 [2024-07-26 12:25:52.587130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.587161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.587363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.587392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.587559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.587585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.587717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.587745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.587941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.587970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 
00:24:59.439 [2024-07-26 12:25:52.588154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.588181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.588359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.588389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.588538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.588567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.588765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.588792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.588968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.588999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 
00:24:59.439 [2024-07-26 12:25:52.589166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.589196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.589383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.589410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.589540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.589567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.589691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.589717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.589899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.589925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 
00:24:59.439 [2024-07-26 12:25:52.590056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.590087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.590280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.590309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.590508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.590535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.590670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.590696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.590899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.590928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 
00:24:59.439 [2024-07-26 12:25:52.591101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.591129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.591304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.439 [2024-07-26 12:25:52.591348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.439 qpair failed and we were unable to recover it. 00:24:59.439 [2024-07-26 12:25:52.591513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-26 12:25:52.591549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-26 12:25:52.591753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-26 12:25:52.591780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-26 12:25:52.591981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-26 12:25:52.592011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 
00:24:59.440 [2024-07-26 12:25:52.592208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-26 12:25:52.592235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-26 12:25:52.592368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-26 12:25:52.592395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-26 12:25:52.592596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-26 12:25:52.592627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-26 12:25:52.592771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-26 12:25:52.592798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 00:24:59.440 [2024-07-26 12:25:52.592978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.440 [2024-07-26 12:25:52.593005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.440 qpair failed and we were unable to recover it. 
00:24:59.443 [2024-07-26 12:25:52.615082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.615109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.615257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.615287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.615421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.615450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.615630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.615656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.615831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.615861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 
00:24:59.443 [2024-07-26 12:25:52.616041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.616075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.616234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.616261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.616394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.616437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.616633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.616663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.616814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.616841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 
00:24:59.443 [2024-07-26 12:25:52.616995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.617022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.617247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.617277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.617451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.617478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.617637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.617664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.617844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.617870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 
00:24:59.443 [2024-07-26 12:25:52.618038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.618090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.618233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.618260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.618437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.618471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.618621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.618649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.618803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.618829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 
00:24:59.443 [2024-07-26 12:25:52.619025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.619054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.619250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.619277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.619405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.619432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.619592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.619633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.619802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.619829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 
00:24:59.443 [2024-07-26 12:25:52.619948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.619990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.443 [2024-07-26 12:25:52.620174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.443 [2024-07-26 12:25:52.620202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.443 qpair failed and we were unable to recover it. 00:24:59.444 [2024-07-26 12:25:52.620356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.444 [2024-07-26 12:25:52.620383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.444 qpair failed and we were unable to recover it. 00:24:59.444 [2024-07-26 12:25:52.620538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.444 [2024-07-26 12:25:52.620565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.444 qpair failed and we were unable to recover it. 00:24:59.444 [2024-07-26 12:25:52.620766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.444 [2024-07-26 12:25:52.620796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.444 qpair failed and we were unable to recover it. 
00:24:59.725 [2024-07-26 12:25:52.620988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.621016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.621204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.621244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.621422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.621450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.621633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.621659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.621853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.621881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 
00:24:59.725 [2024-07-26 12:25:52.622052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.622089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.622239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.622264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.622441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.622484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.622693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.622720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.622873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.622901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 
00:24:59.725 [2024-07-26 12:25:52.623104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.623148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.623322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.623366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.623568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.623595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.623739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.623768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.623937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.623971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 
00:24:59.725 [2024-07-26 12:25:52.624152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.624179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.624332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.624360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.624535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.624564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.624701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.624728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 00:24:59.725 [2024-07-26 12:25:52.624862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.725 [2024-07-26 12:25:52.624888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.725 qpair failed and we were unable to recover it. 
00:24:59.725 [2024-07-26 12:25:52.625045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.625079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.625216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.625244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.625418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.625447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.625589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.625618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.625793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.625820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 
00:24:59.726 [2024-07-26 12:25:52.625985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.626011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.626141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.626168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.626322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.626348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.626500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.626527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.626674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.626718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 
00:24:59.726 [2024-07-26 12:25:52.626903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.626929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.627082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.627110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.627317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.627347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.627527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.627554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.627998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.628029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 
00:24:59.726 [2024-07-26 12:25:52.628212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.628239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.628382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.628409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.628564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.628592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.628784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.628811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.628983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.629013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 
00:24:59.726 [2024-07-26 12:25:52.629205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.629233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.629369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.629412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.629581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.629607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.629776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.629805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 00:24:59.726 [2024-07-26 12:25:52.629981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.726 [2024-07-26 12:25:52.630010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.726 qpair failed and we were unable to recover it. 
00:24:59.726 [2024-07-26 12:25:52.630186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.726 [2024-07-26 12:25:52.630214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.726 qpair failed and we were unable to recover it.
00:24:59.730 [2024-07-26 12:25:52.652834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.652880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.653093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.653123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.653278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.653304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.653432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.653457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.653634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.653679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 
00:24:59.730 [2024-07-26 12:25:52.653885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.653912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.654094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.654124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.654294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.654336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.654498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.654525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.654654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.654697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 
00:24:59.730 [2024-07-26 12:25:52.654877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.654908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.655113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.655140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.655280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.655308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.655476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.655505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.655687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.655724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 
00:24:59.730 [2024-07-26 12:25:52.655906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.655936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.656109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.656138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.656307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.656335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.656490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.656538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.656747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.656774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 
00:24:59.730 [2024-07-26 12:25:52.656902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.656929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.657088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.657146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.657308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.657338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.657480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.657507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.657639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.657683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 
00:24:59.730 [2024-07-26 12:25:52.657908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.657944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.658081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.730 [2024-07-26 12:25:52.658109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.730 qpair failed and we were unable to recover it. 00:24:59.730 [2024-07-26 12:25:52.658264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.658290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.658512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.658543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.658731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.658758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 
00:24:59.731 [2024-07-26 12:25:52.658911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.658940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.659140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.659171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.659357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.659385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.659545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.659572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.659710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.659755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 
00:24:59.731 [2024-07-26 12:25:52.659929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.659957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.660087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.660112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.660246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.660273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.660433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.660461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.660659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.660689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 
00:24:59.731 [2024-07-26 12:25:52.660884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.660914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.661091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.661119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.661285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.661318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.661502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.661531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.661709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.661738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 
00:24:59.731 [2024-07-26 12:25:52.661932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.661971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.662147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.662177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.662322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.662358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.662486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.662528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.662703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.662734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 
00:24:59.731 [2024-07-26 12:25:52.662891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.662918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.663050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.663084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.663246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.663292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.663472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.663499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.663639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.663670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 
00:24:59.731 [2024-07-26 12:25:52.663801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.663852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.664040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.664072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.664248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.664278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.664425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.664455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.664620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.664649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 
00:24:59.731 [2024-07-26 12:25:52.664857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.731 [2024-07-26 12:25:52.664886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.731 qpair failed and we were unable to recover it. 00:24:59.731 [2024-07-26 12:25:52.665071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.665116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.665242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.665268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.665447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.665477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.665612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.665642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 
00:24:59.732 [2024-07-26 12:25:52.665796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.665829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.665995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.666031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.666172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.666200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.666364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.666392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.666570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.666601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 
00:24:59.732 [2024-07-26 12:25:52.666785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.666814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.666990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.667017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.667168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.667210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.667385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.667415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.667614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.667646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 
00:24:59.732 [2024-07-26 12:25:52.667819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.667859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.668045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.668082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.668221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.668249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.668382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.668427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 00:24:59.732 [2024-07-26 12:25:52.668639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.732 [2024-07-26 12:25:52.668667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.732 qpair failed and we were unable to recover it. 
00:24:59.735 [2024-07-26 12:25:52.690868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.735 [2024-07-26 12:25:52.690897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.735 qpair failed and we were unable to recover it. 00:24:59.735 [2024-07-26 12:25:52.691082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.735 [2024-07-26 12:25:52.691116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.735 qpair failed and we were unable to recover it. 00:24:59.735 [2024-07-26 12:25:52.691305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.735 [2024-07-26 12:25:52.691335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.735 qpair failed and we were unable to recover it. 00:24:59.735 [2024-07-26 12:25:52.691502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.735 [2024-07-26 12:25:52.691532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.735 qpair failed and we were unable to recover it. 00:24:59.735 [2024-07-26 12:25:52.691689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.735 [2024-07-26 12:25:52.691723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.735 qpair failed and we were unable to recover it. 
00:24:59.735 [2024-07-26 12:25:52.691862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.735 [2024-07-26 12:25:52.691890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.735 qpair failed and we were unable to recover it. 00:24:59.735 [2024-07-26 12:25:52.692069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.735 [2024-07-26 12:25:52.692113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.735 qpair failed and we were unable to recover it. 00:24:59.735 [2024-07-26 12:25:52.692311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.735 [2024-07-26 12:25:52.692341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.692535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.692569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.692743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.692772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 
00:24:59.736 [2024-07-26 12:25:52.692968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.692998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.693166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.693194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.693350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.693377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.693530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.693557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.693690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.693718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 
00:24:59.736 [2024-07-26 12:25:52.693895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.693931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.694112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.694139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.694275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.694301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.694481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.694520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.694678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.694705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 
00:24:59.736 [2024-07-26 12:25:52.694829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.694857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.695008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.695036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.695234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.695261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.695434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.695463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.695636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.695669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 
00:24:59.736 [2024-07-26 12:25:52.695855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.695892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.696032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.696070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.696233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.696272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.696448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.696479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.696630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.696658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 
00:24:59.736 [2024-07-26 12:25:52.696816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.696842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.696995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.697021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.697178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.697232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.697377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.697406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.697580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.697607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 
00:24:59.736 [2024-07-26 12:25:52.697761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.697795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.697928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.697972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.698148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.698175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.698357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.698387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.698547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.698578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 
00:24:59.736 [2024-07-26 12:25:52.698800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.698827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.698996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.699027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.699207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.736 [2024-07-26 12:25:52.699238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.736 qpair failed and we were unable to recover it. 00:24:59.736 [2024-07-26 12:25:52.699371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.699398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.699527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.699554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 
00:24:59.737 [2024-07-26 12:25:52.699733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.699760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.699916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.699944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.700107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.700134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.700311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.700340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.700543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.700570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 
00:24:59.737 [2024-07-26 12:25:52.700729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.700767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.700911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.700940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.701108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.701136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.701334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.701375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.701589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.701619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 
00:24:59.737 [2024-07-26 12:25:52.701798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.701827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.701974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.702004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.702179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.702210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.702359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.702386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.702528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.702556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 
00:24:59.737 [2024-07-26 12:25:52.702746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.702776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.702918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.702945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.703144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.703175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.703320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.703350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.703517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.703555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 
00:24:59.737 [2024-07-26 12:25:52.703742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.703771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.703971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.703998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.704151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.704179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.704324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.704354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.704541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.704568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 
00:24:59.737 [2024-07-26 12:25:52.704732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.704768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.704932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.704962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.705122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.705150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.705280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.705307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.705463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.705491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 
00:24:59.737 [2024-07-26 12:25:52.705664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.705694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.705846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.705873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.737 [2024-07-26 12:25:52.706040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.737 [2024-07-26 12:25:52.706089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.737 qpair failed and we were unable to recover it. 00:24:59.738 [2024-07-26 12:25:52.706265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.738 [2024-07-26 12:25:52.706296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.738 qpair failed and we were unable to recover it. 00:24:59.738 [2024-07-26 12:25:52.706492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.738 [2024-07-26 12:25:52.706519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.738 qpair failed and we were unable to recover it. 
00:24:59.738 [... the three-line sequence "connect() failed, errno = 111" / "sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420" / "qpair failed and we were unable to recover it." repeated 50 more times, timestamps 12:25:52.706726 through 12:25:52.716462 ...]
00:24:59.739 [2024-07-26 12:25:52.716646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.739 [2024-07-26 12:25:52.716677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.739 qpair failed and we were unable to recover it.
00:24:59.739 [2024-07-26 12:25:52.716835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.739 [2024-07-26 12:25:52.716880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.739 qpair failed and we were unable to recover it.
00:24:59.739 [... same three-line sequence for tqpair=0x7fb4f0000b90 repeated 3 more times, timestamps 12:25:52.717084 through 12:25:52.717524 ...]
00:24:59.739 [2024-07-26 12:25:52.717659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.739 [2024-07-26 12:25:52.717686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.739 qpair failed and we were unable to recover it. 00:24:59.739 [2024-07-26 12:25:52.717856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.739 [2024-07-26 12:25:52.717883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.739 qpair failed and we were unable to recover it. 00:24:59.739 [2024-07-26 12:25:52.718067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.739 [2024-07-26 12:25:52.718098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.739 qpair failed and we were unable to recover it. 00:24:59.739 [2024-07-26 12:25:52.718277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.739 [2024-07-26 12:25:52.718308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.739 qpair failed and we were unable to recover it. 00:24:59.739 [2024-07-26 12:25:52.718494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.739 [2024-07-26 12:25:52.718521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.739 qpair failed and we were unable to recover it. 
00:24:59.739 [2024-07-26 12:25:52.718761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.739 [2024-07-26 12:25:52.718812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.739 qpair failed and we were unable to recover it. 00:24:59.739 [2024-07-26 12:25:52.718976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.739 [2024-07-26 12:25:52.719006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.739 qpair failed and we were unable to recover it. 00:24:59.739 [2024-07-26 12:25:52.719187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.739 [2024-07-26 12:25:52.719214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.739 qpair failed and we were unable to recover it. 00:24:59.739 [2024-07-26 12:25:52.719389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.739 [2024-07-26 12:25:52.719419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.739 qpair failed and we were unable to recover it. 00:24:59.739 [2024-07-26 12:25:52.719587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.739 [2024-07-26 12:25:52.719625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.739 qpair failed and we were unable to recover it. 
00:24:59.739 [2024-07-26 12:25:52.719793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.739 [2024-07-26 12:25:52.719822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.739 qpair failed and we were unable to recover it. 00:24:59.739 [2024-07-26 12:25:52.719976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.739 [2024-07-26 12:25:52.720003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.739 qpair failed and we were unable to recover it. 00:24:59.739 [2024-07-26 12:25:52.720149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.739 [2024-07-26 12:25:52.720193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.739 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.720346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.720373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.720522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.720569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 
00:24:59.740 [2024-07-26 12:25:52.720730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.720759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.720936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.720964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.721146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.721177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.721379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.721409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.721554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.721581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 
00:24:59.740 [2024-07-26 12:25:52.721757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.721787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.721995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.722022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.722152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.722180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.722344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.722388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.722556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.722586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 
00:24:59.740 [2024-07-26 12:25:52.722788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.722814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.722988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.723029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.723213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.723241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.723405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.723432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.723634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.723663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 
00:24:59.740 [2024-07-26 12:25:52.723809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.723840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.724041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.724073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.724254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.724284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.724453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.724487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.724645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.724672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 
00:24:59.740 [2024-07-26 12:25:52.724842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.724872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.725073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.725110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.725264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.725293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.725442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.725469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 00:24:59.740 [2024-07-26 12:25:52.725658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.740 [2024-07-26 12:25:52.725688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.740 qpair failed and we were unable to recover it. 
00:24:59.742 [2024-07-26 12:25:52.737772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.742 [2024-07-26 12:25:52.737799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.742 qpair failed and we were unable to recover it. 00:24:59.742 [2024-07-26 12:25:52.738004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.742 [2024-07-26 12:25:52.738035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.742 qpair failed and we were unable to recover it. 00:24:59.742 [2024-07-26 12:25:52.738251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.742 [2024-07-26 12:25:52.738302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.742 qpair failed and we were unable to recover it. 00:24:59.742 [2024-07-26 12:25:52.738497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.742 [2024-07-26 12:25:52.738537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.742 qpair failed and we were unable to recover it. 00:24:59.742 [2024-07-26 12:25:52.738771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.742 [2024-07-26 12:25:52.738808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.742 qpair failed and we were unable to recover it. 
00:24:59.744 [2024-07-26 12:25:52.748432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.748463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.748627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.748655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.748829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.748862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.749073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.749101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.749251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.749279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 
00:24:59.744 [2024-07-26 12:25:52.749449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.749479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.749650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.749680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.749838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.749865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.750040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.750076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.750246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.750274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 
00:24:59.744 [2024-07-26 12:25:52.750452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.750479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.750647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.750677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.750876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.750906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.751078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.751105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.751282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.751314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 
00:24:59.744 [2024-07-26 12:25:52.751482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.751513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.751665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.751692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.751899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.751928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.752080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.752111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.752261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.752291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 
00:24:59.744 [2024-07-26 12:25:52.752447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.752479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.752632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.752680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.752835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.752863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.752999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.753041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.753252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.753279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 
00:24:59.744 [2024-07-26 12:25:52.753408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.753437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.753611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.753641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.753802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.753829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.754022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.754049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.754235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.754265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 
00:24:59.744 [2024-07-26 12:25:52.754450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.754477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.754657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.754684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.754860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.754896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.755081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.755112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 00:24:59.744 [2024-07-26 12:25:52.755293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.744 [2024-07-26 12:25:52.755321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.744 qpair failed and we were unable to recover it. 
00:24:59.744 [2024-07-26 12:25:52.755495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.755528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.755695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.755725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.755870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.755898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.756085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.756119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.756259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.756289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 
00:24:59.745 [2024-07-26 12:25:52.756464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.756490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.756698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.756729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.756922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.756952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.757090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.757118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.757270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.757300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 
00:24:59.745 [2024-07-26 12:25:52.757491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.757521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.757691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.757718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.757850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.757878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.758010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.758037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.758179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.758207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 
00:24:59.745 [2024-07-26 12:25:52.758376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.758406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.758574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.758603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.758754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.758781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.758911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.758954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.759135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.759162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 
00:24:59.745 [2024-07-26 12:25:52.759315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.759342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.759490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.759519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.759714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.759743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.759892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.759919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.760040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.760075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 
00:24:59.745 [2024-07-26 12:25:52.760272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.760306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.760475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.760502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.760637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.760664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.760824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.760867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.761013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.761040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 
00:24:59.745 [2024-07-26 12:25:52.761201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.745 [2024-07-26 12:25:52.761244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.745 qpair failed and we were unable to recover it. 00:24:59.745 [2024-07-26 12:25:52.761419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.761450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.761626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.761653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.761856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.761886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.762053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.762090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 
00:24:59.746 [2024-07-26 12:25:52.762271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.762299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.762414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.762441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.762567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.762595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.762786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.762813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.762968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.762999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 
00:24:59.746 [2024-07-26 12:25:52.763169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.763199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.763352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.763379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.763540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.763583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.763725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.763756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.763925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.763953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 
00:24:59.746 [2024-07-26 12:25:52.764092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.764135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.764282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.764312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.764518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.764547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.764695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.764726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 00:24:59.746 [2024-07-26 12:25:52.764924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.746 [2024-07-26 12:25:52.764955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.746 qpair failed and we were unable to recover it. 
00:24:59.746 [2024-07-26 12:25:52.765094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.765122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.765260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.765287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.765469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.765499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.765642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.765669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.765830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.765857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.765981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.766009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.766160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.766188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.766335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.766365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.766515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.766542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.766725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.766752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.766919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.766949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.767114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.767145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.767324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.767351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.767498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.767525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.767663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.767692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.767873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.746 [2024-07-26 12:25:52.767904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.746 qpair failed and we were unable to recover it.
00:24:59.746 [2024-07-26 12:25:52.768057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.768092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.768268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.768297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.768497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.768524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.768665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.768695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.768858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.768887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.769069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.769096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.769248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.769278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.769446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.769476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.769630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.769658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.769838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.769864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.770036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.770074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.770250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.770277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.770401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.770428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.770562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.770590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.770767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.770793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.770928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.770974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.771173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.771204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.771382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.771409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.771556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.771587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.771727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.771757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.771929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.771959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.772131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.772158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.772280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.772307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.772521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.772548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.772680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.772709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.772876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.772907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.773116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.773144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.773319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.773349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.773539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.773569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.773748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.773775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.773932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.773959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.774093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.774137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.774295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.774321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.774508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.774535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.774716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.747 [2024-07-26 12:25:52.774746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.747 qpair failed and we were unable to recover it.
00:24:59.747 [2024-07-26 12:25:52.774924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.774951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.775125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.775156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.775328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.775358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.775555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.775582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.775753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.775788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.775986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.776015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.776188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.776216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.776394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.776422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.776628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.776658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.776826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.776853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.776972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.776999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.777125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.777153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.777304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.777331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.777530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.777560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.777726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.777756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.777911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.777949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.778153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.778183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.778354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.778383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.778560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.778588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.778790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.778821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.778953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.778983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.779134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.779163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.779348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.779378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.779576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.779607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.779786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.779813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.779969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.779997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.780141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.780172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.780352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.780380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.780556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.780589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.780733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.780763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.780961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.780991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.781160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.781198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.781372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.781403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.781576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.781603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.781760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.781787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.748 qpair failed and we were unable to recover it.
00:24:59.748 [2024-07-26 12:25:52.781939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.748 [2024-07-26 12:25:52.781966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.749 qpair failed and we were unable to recover it.
00:24:59.749 [2024-07-26 12:25:52.782118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.749 [2024-07-26 12:25:52.782145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.749 qpair failed and we were unable to recover it.
00:24:59.749 [2024-07-26 12:25:52.782285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.749 [2024-07-26 12:25:52.782315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.749 qpair failed and we were unable to recover it.
00:24:59.749 [2024-07-26 12:25:52.782523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.749 [2024-07-26 12:25:52.782550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.749 qpair failed and we were unable to recover it.
00:24:59.749 [2024-07-26 12:25:52.782703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.749 [2024-07-26 12:25:52.782731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.749 qpair failed and we were unable to recover it.
00:24:59.749 [2024-07-26 12:25:52.782907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.749 [2024-07-26 12:25:52.782936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.749 qpair failed and we were unable to recover it.
00:24:59.749 [2024-07-26 12:25:52.783107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.749 [2024-07-26 12:25:52.783135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.749 qpair failed and we were unable to recover it.
00:24:59.749 [2024-07-26 12:25:52.783290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.749 [2024-07-26 12:25:52.783317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.749 qpair failed and we were unable to recover it.
00:24:59.749 [2024-07-26 12:25:52.783492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.749 [2024-07-26 12:25:52.783521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:24:59.749 qpair failed and we were unable to recover it.
00:24:59.749 [2024-07-26 12:25:52.783683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.783721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.783898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.783924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.784093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.784123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.784294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.784323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.784465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.784492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 
00:24:59.749 [2024-07-26 12:25:52.784693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.784722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.784917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.784947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.785125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.785152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.785312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.785339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.785489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.785534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 
00:24:59.749 [2024-07-26 12:25:52.785707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.785734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.785862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.785907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.786098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.786129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.786307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.786333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.786488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.786515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 
00:24:59.749 [2024-07-26 12:25:52.786672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.786699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.786862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.786888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.787084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.787129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.787257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.787284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.787441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.787468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 
00:24:59.749 [2024-07-26 12:25:52.787643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.787674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.787847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.787874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.788023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.788050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.788232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.788262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.788429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.788460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 
00:24:59.749 [2024-07-26 12:25:52.788664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.788691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.788857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.749 [2024-07-26 12:25:52.788887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.749 qpair failed and we were unable to recover it. 00:24:59.749 [2024-07-26 12:25:52.789085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.789115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.789294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.789322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.789494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.789523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 
00:24:59.750 [2024-07-26 12:25:52.789717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.789747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.789884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.789911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.790069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.790112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.790264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.790292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.790474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.790501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 
00:24:59.750 [2024-07-26 12:25:52.790665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.790694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.790876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.790902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.791053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.791088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.791226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.791256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.791415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.791445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 
00:24:59.750 [2024-07-26 12:25:52.791613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.791644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.791809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.791839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.791975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.792004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.792183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.792211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.792412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.792442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 
00:24:59.750 [2024-07-26 12:25:52.792619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.792646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.792797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.792823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.792995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.793025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.793209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.793236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.793367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.793394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 
00:24:59.750 [2024-07-26 12:25:52.793519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.793546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.793774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.793800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.794004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.794031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.794226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.794254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.794410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.794437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 
00:24:59.750 [2024-07-26 12:25:52.794615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.794642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.794792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.794842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.795011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.750 [2024-07-26 12:25:52.795041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.750 qpair failed and we were unable to recover it. 00:24:59.750 [2024-07-26 12:25:52.795188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.795215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.795371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.795414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 
00:24:59.751 [2024-07-26 12:25:52.795555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.795584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.795789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.795816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.795979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.796009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.796175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.796206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.796362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.796389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 
00:24:59.751 [2024-07-26 12:25:52.796515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.796542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.796697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.796725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.796885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.796912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.797087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.797118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.797291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.797321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 
00:24:59.751 [2024-07-26 12:25:52.797494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.797521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.797687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.797717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.797882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.797911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.798054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.798087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.798286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.798315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 
00:24:59.751 [2024-07-26 12:25:52.798488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.798517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.798689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.798716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.798915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.798945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.799125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.799152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.799305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.799332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 
00:24:59.751 [2024-07-26 12:25:52.799526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.799560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.799718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.799747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.799888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.799919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.800096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.800127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.800298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.800328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 
00:24:59.751 [2024-07-26 12:25:52.800530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.800557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.800738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.800767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.800961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.800990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.801193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.801221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.801445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.801474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 
00:24:59.751 [2024-07-26 12:25:52.801673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.801702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.801905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.801932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.802087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.751 [2024-07-26 12:25:52.802114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.751 qpair failed and we were unable to recover it. 00:24:59.751 [2024-07-26 12:25:52.802307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.802336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.802545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.802572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 
00:24:59.752 [2024-07-26 12:25:52.802721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.802748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.802893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.802919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.803071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.803099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.803266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.803296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.803487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.803516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 
00:24:59.752 [2024-07-26 12:25:52.803662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.803689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.803878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.803905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.804091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.804121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.804298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.804325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.804524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.804554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 
00:24:59.752 [2024-07-26 12:25:52.804719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.804749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.804949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.804977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.805165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.805209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.805426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.805457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.805610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.805637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 
00:24:59.752 [2024-07-26 12:25:52.805891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.805942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.806135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.806165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.806343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.806369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.806498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.806526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.806657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.806684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 
00:24:59.752 [2024-07-26 12:25:52.806862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.806889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.807104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.807131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.807253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.807280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.807435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.807462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.807681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.807731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 
00:24:59.752 [2024-07-26 12:25:52.807928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.807962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.808146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.808173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.808372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.808401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.808539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.808569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.808743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.808769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 
00:24:59.752 [2024-07-26 12:25:52.808935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.808964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.809150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.809177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.809333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.752 [2024-07-26 12:25:52.809359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.752 qpair failed and we were unable to recover it. 00:24:59.752 [2024-07-26 12:25:52.809594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.809649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.809852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.809881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 
00:24:59.753 [2024-07-26 12:25:52.810032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.810064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.810232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.810261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.810403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.810433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.810605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.810631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.810905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.810957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 
00:24:59.753 [2024-07-26 12:25:52.811157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.811187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.811347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.811373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.811552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.811578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.811776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.811805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.811988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.812014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 
00:24:59.753 [2024-07-26 12:25:52.812159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.812189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.812352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.812383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.812549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.812584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.812760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.812829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.813020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.813049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 
00:24:59.753 [2024-07-26 12:25:52.813257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.813284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.813485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.813536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.813734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.813764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.813937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.813973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.814154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.814183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 
00:24:59.753 [2024-07-26 12:25:52.814353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.814383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.814529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.814556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.814710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.814753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.814952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.814981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.815155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.815182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 
00:24:59.753 [2024-07-26 12:25:52.815317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.815345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.815570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.815600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.815798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.815824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.816033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.816068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.816205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.816245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 
00:24:59.753 [2024-07-26 12:25:52.816399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.816430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.816594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.816621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.816798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.816827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.753 qpair failed and we were unable to recover it. 00:24:59.753 [2024-07-26 12:25:52.816980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.753 [2024-07-26 12:25:52.817007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.817210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.817255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 
00:24:59.754 [2024-07-26 12:25:52.817447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.817479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.817662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.817689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.817857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.817910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.818092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.818129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.818286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.818315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 
00:24:59.754 [2024-07-26 12:25:52.818450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.818477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.818632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.818659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.818843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.818870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.819082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.819114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.819291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.819321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 
00:24:59.754 [2024-07-26 12:25:52.819500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.819526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.819740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.819793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.819927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.819956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.820120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.820148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.820297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.820340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 
00:24:59.754 [2024-07-26 12:25:52.820509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.820538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.820673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.820699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.820831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.820857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.821051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.821087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.821269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.821296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 
00:24:59.754 [2024-07-26 12:25:52.821498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.821552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.821723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.821753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.821930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.821961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.822133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.822163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 00:24:59.754 [2024-07-26 12:25:52.822301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.754 [2024-07-26 12:25:52.822331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.754 qpair failed and we were unable to recover it. 
00:24:59.754 [2024-07-26 12:25:52.822535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.754 [2024-07-26 12:25:52.822562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.754 qpair failed and we were unable to recover it.
00:24:59.754 [2024-07-26 12:25:52.822739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.754 [2024-07-26 12:25:52.822766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.754 qpair failed and we were unable to recover it.
00:24:59.754 [2024-07-26 12:25:52.822961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.754 [2024-07-26 12:25:52.822990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.754 qpair failed and we were unable to recover it.
00:24:59.754 [2024-07-26 12:25:52.823171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.754 [2024-07-26 12:25:52.823198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.754 qpair failed and we were unable to recover it.
00:24:59.754 [2024-07-26 12:25:52.823423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.754 [2024-07-26 12:25:52.823472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.754 qpair failed and we were unable to recover it.
00:24:59.754 [2024-07-26 12:25:52.823636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.754 [2024-07-26 12:25:52.823664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.754 qpair failed and we were unable to recover it.
00:24:59.754 [2024-07-26 12:25:52.823815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.754 [2024-07-26 12:25:52.823850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.754 qpair failed and we were unable to recover it.
00:24:59.754 [2024-07-26 12:25:52.823974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.754 [2024-07-26 12:25:52.823999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.754 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.824178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.824204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.824358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.824384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.824599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.824662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.824867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.824896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.825072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.825098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.825278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.825307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.825476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.825505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.825656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.825683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.825830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.825873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.826047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.826082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.826266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.826293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.826467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.826539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.826712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.826741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.826923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.826949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.827102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.827129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.827319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.827346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.827505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.827532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.827726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.827755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.827922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.827951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.828097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.828124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.828276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.828319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.828515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.828544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.828699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.828725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.828891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.828919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.829105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.829133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.829306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.829332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.829459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.829487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.829667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.829712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.829881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.829911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.830053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.830104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.830262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.830288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.755 qpair failed and we were unable to recover it.
00:24:59.755 [2024-07-26 12:25:52.830440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.755 [2024-07-26 12:25:52.830468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.830678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.830707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.830903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.830932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.831135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.831162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.831360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.831389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.831562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.831591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.831736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.831764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.831921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.831948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.832121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.832151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.832340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.832367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.832533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.832562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.832768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.832799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.832929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.832956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.833152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.833182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.833326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.833356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.833522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.833549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.833679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.833705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.833861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.833891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.834071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.834098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.834232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.834258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.834432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.834459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.834609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.834636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.834801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.834830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.834983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.835010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.835169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.835196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.835348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.835392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.835568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.835595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.835749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.835775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.835951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.835980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.836129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.836158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.836295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.836321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.836469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.836512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.836641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.836670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.836846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.836872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.836997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.837024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.756 qpair failed and we were unable to recover it.
00:24:59.756 [2024-07-26 12:25:52.837192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.756 [2024-07-26 12:25:52.837219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.837343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.837370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.837526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.837553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.837734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.837763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.837970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.837997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.838172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.838202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.838373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.838402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.838578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.838605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.838809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.838838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.838984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.839023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.839180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.839207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.839339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.839377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.839511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.839537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.839713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.839740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.839865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.839893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.840062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.840089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.840245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.840275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.840474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.840503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.840653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.840680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.840835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.840862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.841039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.757 [2024-07-26 12:25:52.841074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.757 qpair failed and we were unable to recover it.
00:24:59.757 [2024-07-26 12:25:52.841221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.841247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 00:24:59.757 [2024-07-26 12:25:52.841427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.841453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 00:24:59.757 [2024-07-26 12:25:52.841620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.841649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 00:24:59.757 [2024-07-26 12:25:52.841823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.841852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 00:24:59.757 [2024-07-26 12:25:52.842027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.842053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 
00:24:59.757 [2024-07-26 12:25:52.842199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.842229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 00:24:59.757 [2024-07-26 12:25:52.842373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.842402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 00:24:59.757 [2024-07-26 12:25:52.842580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.842607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 00:24:59.757 [2024-07-26 12:25:52.842767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.842796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 00:24:59.757 [2024-07-26 12:25:52.842970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.843000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 
00:24:59.757 [2024-07-26 12:25:52.843186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.843212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 00:24:59.757 [2024-07-26 12:25:52.843380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.843409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 00:24:59.757 [2024-07-26 12:25:52.843613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.843639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 00:24:59.757 [2024-07-26 12:25:52.843818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.843845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 00:24:59.757 [2024-07-26 12:25:52.844038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.757 [2024-07-26 12:25:52.844081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.757 qpair failed and we were unable to recover it. 
00:24:59.758 [2024-07-26 12:25:52.844243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.844272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.844450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.844478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.844594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.844637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.844830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.844860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.845042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.845077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 
00:24:59.758 [2024-07-26 12:25:52.845204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.845246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.845411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.845440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.845649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.845676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.845831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.845868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.846072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.846102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 
00:24:59.758 [2024-07-26 12:25:52.846296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.846322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.846520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.846548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.846676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.846705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.846842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.846869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.846990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.847017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 
00:24:59.758 [2024-07-26 12:25:52.847212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.847239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.847388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.847415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.847575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.847601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.847787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.847814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.848029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.848068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 
00:24:59.758 [2024-07-26 12:25:52.848251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.848282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.848473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.848499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.848621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.848648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.848784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.848827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.849002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.849031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 
00:24:59.758 [2024-07-26 12:25:52.849197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.849224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.849371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.849398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.849589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.849617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.849776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.849802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.849932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.849975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 
00:24:59.758 [2024-07-26 12:25:52.850145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.850174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.850346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.850373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.850509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.850554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.850753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.850779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 00:24:59.758 [2024-07-26 12:25:52.850910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.758 [2024-07-26 12:25:52.850936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.758 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-26 12:25:52.851084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.851139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.851321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.851348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.851529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.851556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.851733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.851762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.851907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.851937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-26 12:25:52.852113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.852140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.852326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.852353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.852561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.852588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.852740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.852766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.852942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.852982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-26 12:25:52.853168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.853197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.853379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.853405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.853544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.853577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.853750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.853779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.853969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.853998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-26 12:25:52.854150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.854176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.854338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.854364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.854521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.854547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.854682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.854708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.854872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.854899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-26 12:25:52.855050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.855107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.855285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.855326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.855503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.855530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.855686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.855713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.855878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.855907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-26 12:25:52.856073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.856111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.856266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.856293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.856504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.856533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.856688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.856716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.856873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.856901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 
00:24:59.759 [2024-07-26 12:25:52.857112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.857141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.759 qpair failed and we were unable to recover it. 00:24:59.759 [2024-07-26 12:25:52.857325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.759 [2024-07-26 12:25:52.857354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.760 qpair failed and we were unable to recover it. 00:24:59.760 [2024-07-26 12:25:52.857507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.760 [2024-07-26 12:25:52.857533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.760 qpair failed and we were unable to recover it. 00:24:59.760 [2024-07-26 12:25:52.857691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.760 [2024-07-26 12:25:52.857733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.760 qpair failed and we were unable to recover it. 00:24:59.760 [2024-07-26 12:25:52.857903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.760 [2024-07-26 12:25:52.857932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.760 qpair failed and we were unable to recover it. 
00:24:59.760 [2024-07-26 12:25:52.858112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.858139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.858300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.858335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.858513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.858540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.858782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.858808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.858984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.859013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.859185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.859212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.859391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.859418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.859575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.859601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.859752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.859778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.859954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.859983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.860131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.860157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.860288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.860315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.860533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.860559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.860682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.860709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.860864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.860890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.861078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.861115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.861295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.861324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.861494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.861528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.861733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.861760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.861918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.861944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.862148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.862178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.862359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.862385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.862535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.862561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.862712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.862738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.862894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.862931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.863129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.863159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.863329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.863362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.863558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.863584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.863754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.863782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.863987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.864013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.864160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.864187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.760 qpair failed and we were unable to recover it.
00:24:59.760 [2024-07-26 12:25:52.864318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.760 [2024-07-26 12:25:52.864345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.864473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.864500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.864657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.864682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.864811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.864837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.865027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.865056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.865253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.865279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.865448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.865477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.865639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.865667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.865812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.865839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.866036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.866072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.866211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.866240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.866384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.866410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.866562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.866606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.866792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.866821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.866996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.867023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.867177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.867204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.867361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.867387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.867518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.867545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.867701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.867727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.867856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.867883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.868047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.868082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.868256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.868285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.868455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.868484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.868639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.868666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.868852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.868878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.869052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.869088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.869259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.869289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.869444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.869471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.869653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.869679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.869859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.869884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.870067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.870097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.870277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.870304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.870485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.870511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.870631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.870657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.870835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.870860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.871075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.761 [2024-07-26 12:25:52.871120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.761 qpair failed and we were unable to recover it.
00:24:59.761 [2024-07-26 12:25:52.871269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.871295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.871499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.871528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.871701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.871728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.871865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.871903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.872109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.872139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.872290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.872317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.872467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.872510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.872715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.872741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.872903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.872932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.873109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.873138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.873274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.873302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.873479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.873505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.873671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.873699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.873833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.873862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.874037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.874096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.874274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.874303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.874454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.874481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.874642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.874668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.874875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.874903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.875162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.875191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.875381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.875406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.875584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.875613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.875786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.875814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.875987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.876013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.876165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.876193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.876394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.876422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.876598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.876624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.876758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.876785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.876989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.877018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.877182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.877209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.877387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.877416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.877594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.877622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.877804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.877847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.878043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.878078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.878242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.878268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.762 [2024-07-26 12:25:52.878456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.762 [2024-07-26 12:25:52.878482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.762 qpair failed and we were unable to recover it.
00:24:59.763 [2024-07-26 12:25:52.878655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.763 [2024-07-26 12:25:52.878683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.763 qpair failed and we were unable to recover it.
00:24:59.763 [2024-07-26 12:25:52.878852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.763 [2024-07-26 12:25:52.878881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.763 qpair failed and we were unable to recover it.
00:24:59.763 [2024-07-26 12:25:52.879070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.763 [2024-07-26 12:25:52.879096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.763 qpair failed and we were unable to recover it.
00:24:59.763 [2024-07-26 12:25:52.879247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.763 [2024-07-26 12:25:52.879291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.763 qpair failed and we were unable to recover it.
00:24:59.763 [2024-07-26 12:25:52.879463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.763 [2024-07-26 12:25:52.879492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.763 qpair failed and we were unable to recover it.
00:24:59.763 [2024-07-26 12:25:52.879647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.763 [2024-07-26 12:25:52.879673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.763 qpair failed and we were unable to recover it.
00:24:59.763 [2024-07-26 12:25:52.879824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.763 [2024-07-26 12:25:52.879866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.763 qpair failed and we were unable to recover it.
00:24:59.763 [2024-07-26 12:25:52.880047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.763 [2024-07-26 12:25:52.880083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.763 qpair failed and we were unable to recover it.
00:24:59.763 [2024-07-26 12:25:52.880293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.763 [2024-07-26 12:25:52.880319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.763 qpair failed and we were unable to recover it.
00:24:59.763 [2024-07-26 12:25:52.880542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.763 [2024-07-26 12:25:52.880568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.763 qpair failed and we were unable to recover it.
00:24:59.763 [2024-07-26 12:25:52.880693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.763 [2024-07-26 12:25:52.880719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.763 qpair failed and we were unable to recover it.
00:24:59.763 [2024-07-26 12:25:52.880876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.763 [2024-07-26 12:25:52.880902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.763 qpair failed and we were unable to recover it.
00:24:59.763 [2024-07-26 12:25:52.881080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.881110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.881311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.881337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.881517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.881543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.881669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.881695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.881857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.881899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 
00:24:59.763 [2024-07-26 12:25:52.882098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.882124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.882326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.882354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.882494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.882523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.882672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.882698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.882837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.882863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 
00:24:59.763 [2024-07-26 12:25:52.883034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.883100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.883301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.883327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.883479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.883507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.883654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.883680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.883838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.883864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 
00:24:59.763 [2024-07-26 12:25:52.884043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.884079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.884230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.884256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.884403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.884437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.884591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.884617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.884791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.884830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 
00:24:59.763 [2024-07-26 12:25:52.884986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.885043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.885210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.885238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.885371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.763 [2024-07-26 12:25:52.885402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.763 qpair failed and we were unable to recover it. 00:24:59.763 [2024-07-26 12:25:52.885582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.885611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.885806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.885850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-26 12:25:52.886038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.886075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.886214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.886242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.886418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.886463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.886641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.886686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.886932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.886975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-26 12:25:52.887126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.887153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.887329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.887373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.887603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.887650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.887861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.887905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.888073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.888102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-26 12:25:52.888264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.888292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.888501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.888530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.888725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.888771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.888950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.888977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.889158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.889187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-26 12:25:52.889373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.889416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.889623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.889666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.889827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.889853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.890007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.890034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.890245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.890289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-26 12:25:52.890463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.890507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.890772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.890820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.890948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.890975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.891181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.891227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.891403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.891447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-26 12:25:52.891619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.891664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.891815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.891841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.891993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.892019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.892204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.892248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.892452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.892497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 
00:24:59.764 [2024-07-26 12:25:52.892650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.892693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.764 [2024-07-26 12:25:52.892873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.764 [2024-07-26 12:25:52.892899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.764 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.893052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.893085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.893291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.893320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.893510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.893554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 
00:24:59.765 [2024-07-26 12:25:52.893702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.893745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.893873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.893899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.894043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.894078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.894255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.894299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.894478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.894525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 
00:24:59.765 [2024-07-26 12:25:52.894696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.894741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.894891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.894917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.895131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.895175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.895388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.895431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.895608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.895655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 
00:24:59.765 [2024-07-26 12:25:52.895821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.895851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.896023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.896048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.896222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.896265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.896470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.896513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.896701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.896744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 
00:24:59.765 [2024-07-26 12:25:52.896881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.896907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.897080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.897127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.897295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.897340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.897553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.897596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.897783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.897826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 
00:24:59.765 [2024-07-26 12:25:52.897980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.898006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.898159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.898203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.898382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.898426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.898634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.898677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.898835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.898861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 
00:24:59.765 [2024-07-26 12:25:52.899016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.899042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.899227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.899270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.899473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.899516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.899710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.899756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.899915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.899942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 
00:24:59.765 [2024-07-26 12:25:52.900117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.900148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.900336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.900379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.900556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.900599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.900753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.900780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 00:24:59.765 [2024-07-26 12:25:52.900960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.765 [2024-07-26 12:25:52.900986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.765 qpair failed and we were unable to recover it. 
00:24:59.765 [2024-07-26 12:25:52.901133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.901177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.901383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.901426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.901606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.901652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.901801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.901827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.901984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.902010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 
00:24:59.766 [2024-07-26 12:25:52.902201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.902228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.902407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.902451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.902602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.902649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.902828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.902854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.903011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.903038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 
00:24:59.766 [2024-07-26 12:25:52.903228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.903275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.903483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.903526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.903694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.903737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.903918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.903944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.904086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.904114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 
00:24:59.766 [2024-07-26 12:25:52.904303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.904335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.904515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.904558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.904692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.904719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.904877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.904903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.905057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.905090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 
00:24:59.766 [2024-07-26 12:25:52.905257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.905301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.905489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.905533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.905673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.905718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.905873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.905899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.906082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.906109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 
00:24:59.766 [2024-07-26 12:25:52.906269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.906313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.906500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.906543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.906699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.906742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.906874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.906905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.907034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.907067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 
00:24:59.766 [2024-07-26 12:25:52.907254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.907299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.907492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.907537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.907693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.907737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.907884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.907911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.908084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.908133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 
00:24:59.766 [2024-07-26 12:25:52.908298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.908326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.908511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.908541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.908734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.908764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.766 qpair failed and we were unable to recover it. 00:24:59.766 [2024-07-26 12:25:52.908958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.766 [2024-07-26 12:25:52.908988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.909192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.909220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 
00:24:59.767 [2024-07-26 12:25:52.909388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.909415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.909555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.909586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.909759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.909788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.909984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.910013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.910180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.910209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 
00:24:59.767 [2024-07-26 12:25:52.910386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.910417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.910620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.910651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.910822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.910860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.911064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.911111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.911233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.911260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 
00:24:59.767 [2024-07-26 12:25:52.911382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.911426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.911584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.911627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.911840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.911871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.912027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.912055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.912200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.912230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 
00:24:59.767 [2024-07-26 12:25:52.912391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.912418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.912595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.912625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.912805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.912836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.913014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.913043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.913208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.913236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 
00:24:59.767 [2024-07-26 12:25:52.913379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.913410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.913639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.913671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.913819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.913850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.914021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.914048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.914212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.914239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 
00:24:59.767 [2024-07-26 12:25:52.914364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.914393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.914523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.914549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.914799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.914831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.914999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.915033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.915223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.915250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 
00:24:59.767 [2024-07-26 12:25:52.915411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.915441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.915587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.915616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.915793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.915823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.915972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.916003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 00:24:59.767 [2024-07-26 12:25:52.916244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.767 [2024-07-26 12:25:52.916284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:24:59.767 qpair failed and we were unable to recover it. 
00:24:59.767 [2024-07-26 12:25:52.916474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:24:59.767 [2024-07-26 12:25:52.916506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:24:59.767 qpair failed and we were unable to recover it.
[The same three-line sequence (connect() failed, errno = 111 → sock connection error → qpair failed and we were unable to recover it) repeats for every retry from 12:25:52.916743 through 12:25:52.940143, always with addr=10.0.0.2, port=4420. The reported tqpair is 0x7fb500000b90 through 12:25:52.920019, then 0x7fb4f0000b90 for the remaining attempts, apart from three entries at 12:25:52.924920–52.925407 reporting tqpair=0x21bf250.]
00:24:59.770 [2024-07-26 12:25:52.940291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.770 [2024-07-26 12:25:52.940320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.770 qpair failed and we were unable to recover it. 00:24:59.770 [2024-07-26 12:25:52.940477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.770 [2024-07-26 12:25:52.940507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.770 qpair failed and we were unable to recover it. 00:24:59.770 [2024-07-26 12:25:52.940690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.770 [2024-07-26 12:25:52.940716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.770 qpair failed and we were unable to recover it. 00:24:59.770 [2024-07-26 12:25:52.940939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.770 [2024-07-26 12:25:52.940965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.770 qpair failed and we were unable to recover it. 00:24:59.770 [2024-07-26 12:25:52.941112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.770 [2024-07-26 12:25:52.941149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.770 qpair failed and we were unable to recover it. 
00:24:59.770 [2024-07-26 12:25:52.941349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.770 [2024-07-26 12:25:52.941379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.770 qpair failed and we were unable to recover it. 00:24:59.770 [2024-07-26 12:25:52.941556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.770 [2024-07-26 12:25:52.941585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.770 qpair failed and we were unable to recover it. 00:24:59.770 [2024-07-26 12:25:52.941778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.770 [2024-07-26 12:25:52.941807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.770 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.941972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.942002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.942177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.942205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 
00:24:59.771 [2024-07-26 12:25:52.942341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.942367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.942499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.942526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.942680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.942707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.942868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.942913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.943053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.943088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 
00:24:59.771 [2024-07-26 12:25:52.943244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.943271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.943427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.943454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.943605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.943631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.943812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.943840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.944038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.944088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 
00:24:59.771 [2024-07-26 12:25:52.944264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.944294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.944469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.944498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.944646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.944676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.944812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.944838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.945005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.945032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 
00:24:59.771 [2024-07-26 12:25:52.945211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.945241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.945420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.945449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.945602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.945631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.945779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.945807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.946024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.946053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 
00:24:59.771 [2024-07-26 12:25:52.946235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.946265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.946438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.946467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.946676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.946703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.946882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.946909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.947103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.947145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 
00:24:59.771 [2024-07-26 12:25:52.947301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.947331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.947485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.947511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.947664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.947709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.947886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.947915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.948087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.948115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 
00:24:59.771 [2024-07-26 12:25:52.948316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.948346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.948522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.948552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.948724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.948751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.948941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.948970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.949137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.949172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 
00:24:59.771 [2024-07-26 12:25:52.949330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.949356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.949508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.949534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.771 [2024-07-26 12:25:52.949728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.771 [2024-07-26 12:25:52.949758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.771 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.949960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.949990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.950171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.950198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 
00:24:59.772 [2024-07-26 12:25:52.950352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.950379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.950539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.950566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.950716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.950743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.950916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.950946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.951127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.951154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 
00:24:59.772 [2024-07-26 12:25:52.951349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.951379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.951549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.951577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.951729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.951756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.951921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.951948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.952078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.952105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 
00:24:59.772 [2024-07-26 12:25:52.952236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.952263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.952468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.952500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.952651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.952677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.952831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.952857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.953046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.953083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 
00:24:59.772 [2024-07-26 12:25:52.953257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.953286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.953466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.953493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.953666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.953696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.953846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.953880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.954079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.954117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 
00:24:59.772 [2024-07-26 12:25:52.954250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.954277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.954455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.954485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.954654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.954680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.954857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.954886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 00:24:59.772 [2024-07-26 12:25:52.955089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:59.772 [2024-07-26 12:25:52.955117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:24:59.772 qpair failed and we were unable to recover it. 
00:25:00.055 [2024-07-26 12:25:52.955289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.055 [2024-07-26 12:25:52.955318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.055 qpair failed and we were unable to recover it. 00:25:00.055 [2024-07-26 12:25:52.955496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.055 [2024-07-26 12:25:52.955526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.055 qpair failed and we were unable to recover it. 00:25:00.055 [2024-07-26 12:25:52.955693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.055 [2024-07-26 12:25:52.955724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.055 qpair failed and we were unable to recover it. 00:25:00.055 [2024-07-26 12:25:52.955930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.055 [2024-07-26 12:25:52.955956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.055 qpair failed and we were unable to recover it. 00:25:00.055 [2024-07-26 12:25:52.956094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.055 [2024-07-26 12:25:52.956121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.055 qpair failed and we were unable to recover it. 
00:25:00.058 [2024-07-26 12:25:52.977791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.058 [2024-07-26 12:25:52.977820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.058 qpair failed and we were unable to recover it. 00:25:00.058 [2024-07-26 12:25:52.977988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.058 [2024-07-26 12:25:52.978017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.058 qpair failed and we were unable to recover it. 00:25:00.058 [2024-07-26 12:25:52.978184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.058 [2024-07-26 12:25:52.978211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.058 qpair failed and we were unable to recover it. 00:25:00.058 [2024-07-26 12:25:52.978413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.058 [2024-07-26 12:25:52.978442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.058 qpair failed and we were unable to recover it. 00:25:00.058 [2024-07-26 12:25:52.978610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.058 [2024-07-26 12:25:52.978639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.058 qpair failed and we were unable to recover it. 
00:25:00.058 [2024-07-26 12:25:52.978827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.058 [2024-07-26 12:25:52.978854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.058 qpair failed and we were unable to recover it. 00:25:00.058 [2024-07-26 12:25:52.979075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.058 [2024-07-26 12:25:52.979104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.058 qpair failed and we were unable to recover it. 00:25:00.058 [2024-07-26 12:25:52.979271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.058 [2024-07-26 12:25:52.979301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.058 qpair failed and we were unable to recover it. 00:25:00.058 [2024-07-26 12:25:52.979478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.058 [2024-07-26 12:25:52.979505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.058 qpair failed and we were unable to recover it. 00:25:00.058 [2024-07-26 12:25:52.979676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.058 [2024-07-26 12:25:52.979705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.058 qpair failed and we were unable to recover it. 
00:25:00.058 [2024-07-26 12:25:52.979843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.979873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.980021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.980049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.980228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.980255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.980445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.980474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.980650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.980681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 
00:25:00.059 [2024-07-26 12:25:52.980856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.980886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.981083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.981124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.981274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.981300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.981427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.981471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.981670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.981697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 
00:25:00.059 [2024-07-26 12:25:52.981846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.981872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.982075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.982106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.982309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.982336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.982493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.982519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.982702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.982732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 
00:25:00.059 [2024-07-26 12:25:52.982928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.982957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.983116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.983143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.983273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.983300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.983554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.983584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.983758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.983784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 
00:25:00.059 [2024-07-26 12:25:52.983941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.983968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.984117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.984144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.984296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.984322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.984525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.984562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.984733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.984763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 
00:25:00.059 [2024-07-26 12:25:52.984965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.984992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.985184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.985211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.985344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.985371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.985556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.985583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.985769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.985798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 
00:25:00.059 [2024-07-26 12:25:52.986040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.986075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.986285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.986312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.986489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.986518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.986660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.986690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 00:25:00.059 [2024-07-26 12:25:52.986864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.059 [2024-07-26 12:25:52.986891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.059 qpair failed and we were unable to recover it. 
00:25:00.059 [2024-07-26 12:25:52.987064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.987093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.987256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.987285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.987450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.987477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.987677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.987706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.987909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.987938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 
00:25:00.060 [2024-07-26 12:25:52.988177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.988204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.988384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.988414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.988615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.988645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.988809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.988837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.989027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.989066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 
00:25:00.060 [2024-07-26 12:25:52.989206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.989234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.989360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.989386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.989581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.989610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.989783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.989809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.989947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.989977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 
00:25:00.060 [2024-07-26 12:25:52.990123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.990151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.990310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.990352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.990519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.990545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.990700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.990727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.990850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.990877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 
00:25:00.060 [2024-07-26 12:25:52.991100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.991133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.991295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.991322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.991529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.991558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.991715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.991741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.991867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.991908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 
00:25:00.060 [2024-07-26 12:25:52.992047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.992082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.992223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.992250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.992461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.992490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.992658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.060 [2024-07-26 12:25:52.992687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.060 qpair failed and we were unable to recover it. 00:25:00.060 [2024-07-26 12:25:52.992836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.061 [2024-07-26 12:25:52.992863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.061 qpair failed and we were unable to recover it. 
00:25:00.061 [2024-07-26 12:25:52.993019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.993045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.993179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.993206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.993357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.993384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.993561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.993590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.993735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.993764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.993945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.993971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.994165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.994201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.994341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.994369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.994518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.994544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.994810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.994862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.995037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.995071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.995241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.995267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.995403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.995430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.995584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.995610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.995760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.995786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.995985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.996014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.996232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.996260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.996432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.996459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.996699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.996751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.996905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.996937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.997122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.997149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.997280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.997307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.997475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.997501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.997655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.997682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.997842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.997868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.998074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.998100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.998227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.998252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.998404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.998430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.998584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.998610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.998739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.998765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.998959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.998987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.999160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.999189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.999332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.999359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.999515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.999558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.061 qpair failed and we were unable to recover it.
00:25:00.061 [2024-07-26 12:25:52.999707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.061 [2024-07-26 12:25:52.999734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:52.999910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:52.999939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.000141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.000168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.000323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.000371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.000545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.000572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.000723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.000749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.000922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.000951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.001103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.001139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.001338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.001367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.001532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.001561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.001714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.001741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.001902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.001928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.002105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.002146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.002350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.002376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.002636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.002687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.002885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.002912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.003079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.003106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.003262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.003288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.003466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.003492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.003698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.003724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.003858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.003886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.004089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.004129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.004308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.004334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.004458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.004486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.004640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.004667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.004847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.004879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.005053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.005090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.005265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.005291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.005439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.005466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.005692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.005744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.005888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.005917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.006089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.006127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.006300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.006329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.006489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.006518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.006705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.006731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.006936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.062 [2024-07-26 12:25:53.006965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.062 qpair failed and we were unable to recover it.
00:25:00.062 [2024-07-26 12:25:53.007138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.007167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.007311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.007347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.007500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.007544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.007718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.007747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.007943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.007969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.008125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.008169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.008370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.008402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.008601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.008628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.008831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.008861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.009025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.009054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.009221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.009248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.009430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.009473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.009641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.009671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.009848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.009875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.010054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.010090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.010277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.010307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.010522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.010549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.010866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.010928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.011124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.011151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.011312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.011349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.011542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.011572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.011741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.011770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.011918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.011946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.012143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.012174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.012373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.012402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.012581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.012609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.012763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.012789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.012961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.012992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.013174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.013201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.013329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.013362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.013520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.013563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.013741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.013768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.013944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.013971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.014149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.014179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.014353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.063 [2024-07-26 12:25:53.014380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.063 qpair failed and we were unable to recover it.
00:25:00.063 [2024-07-26 12:25:53.014550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.064 [2024-07-26 12:25:53.014608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.064 qpair failed and we were unable to recover it.
00:25:00.064 [2024-07-26 12:25:53.014789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.064 [2024-07-26 12:25:53.014816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.064 qpair failed and we were unable to recover it.
00:25:00.064 [2024-07-26 12:25:53.015022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.064 [2024-07-26 12:25:53.015052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.064 qpair failed and we were unable to recover it.
00:25:00.064 [2024-07-26 12:25:53.015217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.064 [2024-07-26 12:25:53.015244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.064 qpair failed and we were unable to recover it.
00:25:00.064 [2024-07-26 12:25:53.015457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.064 [2024-07-26 12:25:53.015486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.064 qpair failed and we were unable to recover it.
00:25:00.064 [2024-07-26 12:25:53.015686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.064 [2024-07-26 12:25:53.015713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.064 qpair failed and we were unable to recover it.
00:25:00.064 [2024-07-26 12:25:53.015838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.064 [2024-07-26 12:25:53.015864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.064 qpair failed and we were unable to recover it.
00:25:00.064 [2024-07-26 12:25:53.016015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.064 [2024-07-26 12:25:53.016042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.064 qpair failed and we were unable to recover it.
00:25:00.064 [2024-07-26 12:25:53.016247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.064 [2024-07-26 12:25:53.016274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.064 qpair failed and we were unable to recover it.
00:25:00.064 [2024-07-26 12:25:53.016440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.064 [2024-07-26 12:25:53.016467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.064 qpair failed and we were unable to recover it.
00:25:00.064 [2024-07-26 12:25:53.016646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.016673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.016866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.016893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.017089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.017130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.017293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.017331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.017506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.017533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 
00:25:00.064 [2024-07-26 12:25:53.017735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.017784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.017953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.017983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.018160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.018187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.018326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.018355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.018516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.018543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 
00:25:00.064 [2024-07-26 12:25:53.018698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.018726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.018917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.018981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.019128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.019157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.019326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.019353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.019615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.019663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 
00:25:00.064 [2024-07-26 12:25:53.019857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.019887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.020134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.020161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.020317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.020362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.020564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.020594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.020742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.020769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 
00:25:00.064 [2024-07-26 12:25:53.020922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.020948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.021075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.021112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.021291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.021328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.021520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.021572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 00:25:00.064 [2024-07-26 12:25:53.021767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.064 [2024-07-26 12:25:53.021802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.064 qpair failed and we were unable to recover it. 
00:25:00.064 [2024-07-26 12:25:53.022010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.022037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.022248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.022279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.022480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.022509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.022711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.022737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.022922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.022952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 
00:25:00.065 [2024-07-26 12:25:53.023116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.023142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.023263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.023289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.023426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.023471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.023666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.023696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.023868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.023895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 
00:25:00.065 [2024-07-26 12:25:53.024075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.024108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.024283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.024312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.024487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.024514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.024675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.024702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.024884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.024911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 
00:25:00.065 [2024-07-26 12:25:53.025115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.025143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.025277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.025305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.025446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.025471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.025656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.025683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.025903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.025933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 
00:25:00.065 [2024-07-26 12:25:53.026101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.026131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.026330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.026356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.026614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.026665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.026870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.026899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.027141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.027168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 
00:25:00.065 [2024-07-26 12:25:53.027326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.027356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.027555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.027585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.027750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.027776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.027938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.027973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.065 [2024-07-26 12:25:53.028112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.028140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 
00:25:00.065 [2024-07-26 12:25:53.028315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.065 [2024-07-26 12:25:53.028341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.065 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.028574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.028626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.028781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.028809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.029018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.029047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.029245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.029286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 
00:25:00.066 [2024-07-26 12:25:53.029496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.029526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.029721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.029750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.029946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.029977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.030165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.030192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.030326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.030368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 
00:25:00.066 [2024-07-26 12:25:53.030599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.030627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.030802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.030831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.031001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.031031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.031238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.031265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.031439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.031468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 
00:25:00.066 [2024-07-26 12:25:53.031639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.031665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.031926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.031977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.032188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.032215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.032340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.032366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.032494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.032537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 
00:25:00.066 [2024-07-26 12:25:53.032740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.032770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.032947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.032976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.033158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.033186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.033358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.033387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.033561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.033587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 
00:25:00.066 [2024-07-26 12:25:53.033785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.033847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.034069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.034112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.034266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.034292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.034505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.034558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.034819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.034888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 
00:25:00.066 [2024-07-26 12:25:53.035036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.035075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.035225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.035252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.035398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.035426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.035609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.035635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.035868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.035919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 
00:25:00.066 [2024-07-26 12:25:53.036094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.066 [2024-07-26 12:25:53.036126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.066 qpair failed and we were unable to recover it. 00:25:00.066 [2024-07-26 12:25:53.036308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.036346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.036482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.036509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.036696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.036723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.036880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.036906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 
00:25:00.067 [2024-07-26 12:25:53.037052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.037083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.037239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.037265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.037428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.037455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.037710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.037760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.037963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.037991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 
00:25:00.067 [2024-07-26 12:25:53.038185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.038211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.038380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.038409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.038608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.038637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.038845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.038870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.039000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.039027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 
00:25:00.067 [2024-07-26 12:25:53.039195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.039221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.039356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.039381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.039580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.039634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.039803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.039832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.040004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.040029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 
00:25:00.067 [2024-07-26 12:25:53.040208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.040247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.040433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.040466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.040665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.040692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.040984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.041033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.041251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.041278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 
00:25:00.067 [2024-07-26 12:25:53.041455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.041481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.041614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.041641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.041793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.041821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.041991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.042018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.042211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.042242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 
00:25:00.067 [2024-07-26 12:25:53.042424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.042454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.042627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.042653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.042813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.042841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.043017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.043046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.067 [2024-07-26 12:25:53.043235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.043261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 
00:25:00.067 [2024-07-26 12:25:53.043453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.067 [2024-07-26 12:25:53.043478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.067 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.043660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.043689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.043832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.043857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.044055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.044089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.044258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.044286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 
00:25:00.068 [2024-07-26 12:25:53.044460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.044485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.044723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.044778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.044982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.045011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.045206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.045233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.045364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.045389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 
00:25:00.068 [2024-07-26 12:25:53.045551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.045578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.045735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.045761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.045918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.045944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.046064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.046089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.046240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.046266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 
00:25:00.068 [2024-07-26 12:25:53.046464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.046520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.046689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.046717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.046901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.046926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.047102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.047131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.047303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.047340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 
00:25:00.068 [2024-07-26 12:25:53.047519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.047545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.047796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.047851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.048022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.048052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.048269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.048297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.048579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.048632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 
00:25:00.068 [2024-07-26 12:25:53.048829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.048854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.049017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.049042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.049238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.049267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.049467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.049496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.049702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.049728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 
00:25:00.068 [2024-07-26 12:25:53.050019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.050079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.050247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.050276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.050450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.050478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.050619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.050645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.068 [2024-07-26 12:25:53.050845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.050874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 
00:25:00.068 [2024-07-26 12:25:53.051057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.068 [2024-07-26 12:25:53.051090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.068 qpair failed and we were unable to recover it. 00:25:00.069 [2024-07-26 12:25:53.051237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.069 [2024-07-26 12:25:53.051265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.069 qpair failed and we were unable to recover it. 00:25:00.069 [2024-07-26 12:25:53.051475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.069 [2024-07-26 12:25:53.051503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.069 qpair failed and we were unable to recover it. 00:25:00.069 [2024-07-26 12:25:53.051683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.069 [2024-07-26 12:25:53.051709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.069 qpair failed and we were unable to recover it. 00:25:00.069 [2024-07-26 12:25:53.051844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.069 [2024-07-26 12:25:53.051870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.069 qpair failed and we were unable to recover it. 
00:25:00.069 [2024-07-26 12:25:53.052027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.069 [2024-07-26 12:25:53.052086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.069 qpair failed and we were unable to recover it. 00:25:00.069 [2024-07-26 12:25:53.052284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.069 [2024-07-26 12:25:53.052322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.069 qpair failed and we were unable to recover it. 00:25:00.069 [2024-07-26 12:25:53.052608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.069 [2024-07-26 12:25:53.052661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.069 qpair failed and we were unable to recover it. 00:25:00.069 [2024-07-26 12:25:53.052815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.069 [2024-07-26 12:25:53.052841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.069 qpair failed and we were unable to recover it. 00:25:00.069 [2024-07-26 12:25:53.052996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.069 [2024-07-26 12:25:53.053022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.069 qpair failed and we were unable to recover it. 
00:25:00.069 [2024-07-26 12:25:53.053168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.069 [2024-07-26 12:25:53.053195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.069 qpair failed and we were unable to recover it.
00:25:00.069 [the same connect()/qpair-failure record for tqpair=0x7fb500000b90 (addr=10.0.0.2, port=4420, errno = 111) repeats continuously from 12:25:53.053349 through 12:25:53.075811]
00:25:00.072 [2024-07-26 12:25:53.075975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.072 [2024-07-26 12:25:53.076003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.072 qpair failed and we were unable to recover it. 00:25:00.072 [2024-07-26 12:25:53.076181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.072 [2024-07-26 12:25:53.076208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.072 qpair failed and we were unable to recover it. 00:25:00.072 [2024-07-26 12:25:53.076339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.072 [2024-07-26 12:25:53.076365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.072 qpair failed and we were unable to recover it. 00:25:00.072 [2024-07-26 12:25:53.076564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.072 [2024-07-26 12:25:53.076592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.072 qpair failed and we were unable to recover it. 00:25:00.072 [2024-07-26 12:25:53.076771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.072 [2024-07-26 12:25:53.076797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.072 qpair failed and we were unable to recover it. 
00:25:00.072 [2024-07-26 12:25:53.076978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.072 [2024-07-26 12:25:53.077004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.072 qpair failed and we were unable to recover it. 00:25:00.072 [2024-07-26 12:25:53.077179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.072 [2024-07-26 12:25:53.077210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.072 qpair failed and we were unable to recover it. 00:25:00.072 [2024-07-26 12:25:53.077364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.072 [2024-07-26 12:25:53.077390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.072 qpair failed and we were unable to recover it. 00:25:00.072 [2024-07-26 12:25:53.077569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.072 [2024-07-26 12:25:53.077596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.072 qpair failed and we were unable to recover it. 00:25:00.072 [2024-07-26 12:25:53.077790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.072 [2024-07-26 12:25:53.077817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.072 qpair failed and we were unable to recover it. 
00:25:00.072 [2024-07-26 12:25:53.077994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.072 [2024-07-26 12:25:53.078020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.072 qpair failed and we were unable to recover it. 00:25:00.072 [2024-07-26 12:25:53.078206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.072 [2024-07-26 12:25:53.078235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.072 qpair failed and we were unable to recover it. 00:25:00.072 [2024-07-26 12:25:53.078420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.078446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.078624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.078649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.078798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.078825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 
00:25:00.073 [2024-07-26 12:25:53.078990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.079018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.079168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.079194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.079354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.079379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.079549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.079578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.079728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.079755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 
00:25:00.073 [2024-07-26 12:25:53.079935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.079964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.080131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.080159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.080337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.080362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.080534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.080564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.080733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.080762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 
00:25:00.073 [2024-07-26 12:25:53.080938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.080962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.081128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.081157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.081360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.081386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.081534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.081560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.081712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.081737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 
00:25:00.073 [2024-07-26 12:25:53.081916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.081942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.082136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.082162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.082313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.082338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.082488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.082517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.082688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.082713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 
00:25:00.073 [2024-07-26 12:25:53.082887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.082915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.083086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.083116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.083309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.083334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.083493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.083519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.083641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.083685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 
00:25:00.073 [2024-07-26 12:25:53.083884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.083909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.084036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.084067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.084263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.084292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.084440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.084466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.084621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.084647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 
00:25:00.073 [2024-07-26 12:25:53.084825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.084854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.085007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.085037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.073 qpair failed and we were unable to recover it. 00:25:00.073 [2024-07-26 12:25:53.085180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.073 [2024-07-26 12:25:53.085224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.085425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.085454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.085628] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.085654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-26 12:25:53.085825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.085853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.086035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.086068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.086221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.086247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.086414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.086442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.086637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.086665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-26 12:25:53.086867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.086892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.087071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.087101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.087289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.087316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.087490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.087516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.087692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.087721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-26 12:25:53.088018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.088074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.088251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.088277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.088434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.088475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.088636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.088664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.088838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.088864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-26 12:25:53.089025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.089051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.089237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.089266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.089434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.089459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.089624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.089652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.089820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.089847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-26 12:25:53.090016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.090041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.090184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.090209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.090331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.090356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.090520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.090546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 00:25:00.074 [2024-07-26 12:25:53.090697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.090723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
00:25:00.074 [2024-07-26 12:25:53.090876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.074 [2024-07-26 12:25:53.090902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.074 qpair failed and we were unable to recover it. 
[... the same error pair (posix.c:1023:posix_sock_create connect() failed, errno = 111, followed by nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420, then "qpair failed and we were unable to recover it.") repeats continuously with advancing timestamps from 12:25:53.090876 through 12:25:53.113363; identical entries omitted ...]
00:25:00.078 [2024-07-26 12:25:53.113520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.113545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.113699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.113742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.113913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.113941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.114090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.114117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.114298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.114342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 
00:25:00.078 [2024-07-26 12:25:53.114513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.114542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.114716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.114743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.114936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.114964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.115159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.115188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.115366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.115392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 
00:25:00.078 [2024-07-26 12:25:53.115542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.115568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.115746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.115771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.115928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.115954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.116130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.116159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.116303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.116331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 
00:25:00.078 [2024-07-26 12:25:53.116540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.116566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.116717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.116748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.116919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.116948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.117119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.117145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.117344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.117372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 
00:25:00.078 [2024-07-26 12:25:53.117567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.117595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.117773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.117798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.117994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.118022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.118204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.118230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.118382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.118408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 
00:25:00.078 [2024-07-26 12:25:53.118607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.118635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.118777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.118803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.118957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.118983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.119126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.119155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 00:25:00.078 [2024-07-26 12:25:53.119331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.078 [2024-07-26 12:25:53.119357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.078 qpair failed and we were unable to recover it. 
00:25:00.078 [2024-07-26 12:25:53.119510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.119537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.119679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.119707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.119846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.119874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.120082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.120108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.120263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.120291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-26 12:25:53.120455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.120484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.120668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.120693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.120842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.120885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.121084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.121110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.121266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.121292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-26 12:25:53.121461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.121491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.121653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.121682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.121831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.121857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.122039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.122088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.122256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.122284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-26 12:25:53.122460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.122486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.122645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.122670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.122845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.122873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.123024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.123049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.123207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.123234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-26 12:25:53.123432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.123461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.123638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.123665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.123832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.123861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.124036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.124068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.124250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.124276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-26 12:25:53.124428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.124453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.124607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.124637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.124769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.124794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.124922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.124963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.125173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.125200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 
00:25:00.079 [2024-07-26 12:25:53.125376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.125402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.125596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.125624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.125826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.125852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.079 qpair failed and we were unable to recover it. 00:25:00.079 [2024-07-26 12:25:53.126002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.079 [2024-07-26 12:25:53.126027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.080 qpair failed and we were unable to recover it. 00:25:00.080 [2024-07-26 12:25:53.126208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.080 [2024-07-26 12:25:53.126237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.080 qpair failed and we were unable to recover it. 
00:25:00.080 [2024-07-26 12:25:53.126384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.080 [2024-07-26 12:25:53.126413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.080 qpair failed and we were unable to recover it. 00:25:00.080 [2024-07-26 12:25:53.126588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.080 [2024-07-26 12:25:53.126614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.080 qpair failed and we were unable to recover it. 00:25:00.080 [2024-07-26 12:25:53.126740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.080 [2024-07-26 12:25:53.126765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.080 qpair failed and we were unable to recover it. 00:25:00.080 [2024-07-26 12:25:53.126945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.080 [2024-07-26 12:25:53.126988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.080 qpair failed and we were unable to recover it. 00:25:00.080 [2024-07-26 12:25:53.127189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.080 [2024-07-26 12:25:53.127215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.080 qpair failed and we were unable to recover it. 
00:25:00.080 [2024-07-26 12:25:53.127407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.080 [2024-07-26 12:25:53.127436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.080 qpair failed and we were unable to recover it. 00:25:00.080 [2024-07-26 12:25:53.127595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.080 [2024-07-26 12:25:53.127622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.080 qpair failed and we were unable to recover it. 00:25:00.080 [2024-07-26 12:25:53.127756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.080 [2024-07-26 12:25:53.127781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.080 qpair failed and we were unable to recover it. 00:25:00.080 [2024-07-26 12:25:53.127967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.080 [2024-07-26 12:25:53.127992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.080 qpair failed and we were unable to recover it. 00:25:00.080 [2024-07-26 12:25:53.128143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.080 [2024-07-26 12:25:53.128168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.080 qpair failed and we were unable to recover it. 
00:25:00.080 [2024-07-26 12:25:53.128302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.128327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.128509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.128534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.128665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.128706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.128853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.128879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.129008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.129033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.129191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.129216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.129354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.129379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.129576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.129605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.129777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.129806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.129979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.130005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.130164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.130191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.130342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.130384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.130564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.130590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.130774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.130801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.131006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.131030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.131194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.131220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.131364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.131391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.131557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.131584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.131786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.131811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.132012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.080 [2024-07-26 12:25:53.132040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.080 qpair failed and we were unable to recover it.
00:25:00.080 [2024-07-26 12:25:53.132196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.132224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.132373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.132402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.132571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.132598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.132802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.132827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.132978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.133004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.133160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.133186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.133357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.133384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.133558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.133584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.133752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.133781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.133949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.133976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.134134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.134160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.134290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.134316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.134510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.134538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.134684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.134709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.134859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.134901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.135119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.135146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.135322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.135348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.135488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.135515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.135684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.135713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.135867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.135892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.136070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.136096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.136251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.136276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.136422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.136448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.136618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.136647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.136814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.136841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.137048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.137078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.137232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.137257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.137461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.137489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.137677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.137703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.137898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.137926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.138130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.138159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.138330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.138356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.138533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.138577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.138755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.138781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.138934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.138960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.081 [2024-07-26 12:25:53.139112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.081 [2024-07-26 12:25:53.139138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.081 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.139292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.139317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.139500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.139526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.139694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.139722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.139859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.139887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.140091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.140117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.140266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.140291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.140484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.140511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.140665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.140690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.140859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.140887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.141035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.141070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.141210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.141235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.141381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.141406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.141589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.141617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.141793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.141818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.142012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.142040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.142242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.142268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.142429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.142455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.142656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.142684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.142857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.142885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.143075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.143102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.143227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.143252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.143468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.143493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.143652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.143677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.143848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.143876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.144046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.144078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.144250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.144275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.144482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.144511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.144671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.144698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.144847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.144872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.145004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.145029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.145212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.145236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.145391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.145416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.145569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.145600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.145723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.145749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.145915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.145942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.082 qpair failed and we were unable to recover it.
00:25:00.082 [2024-07-26 12:25:53.146075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.082 [2024-07-26 12:25:53.146101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.146321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.146346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.146470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.146495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.146688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.146716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.146885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.146913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.147062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.147088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.147265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.147309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.147476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.147505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.147677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.147703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.147824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.147849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.147999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.148026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.148208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.148235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.148434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.148462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.148669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.148694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.148842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.148867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.149048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.149098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.149267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.149295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.149445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.149470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.149647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.149690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.149849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.149877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.150023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.150048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.150240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.150268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.150438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.150466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.150639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.083 [2024-07-26 12:25:53.150664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.083 qpair failed and we were unable to recover it.
00:25:00.083 [2024-07-26 12:25:53.150840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-26 12:25:53.150869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-26 12:25:53.151071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-26 12:25:53.151099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-26 12:25:53.151292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-26 12:25:53.151317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-26 12:25:53.151495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-26 12:25:53.151522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-26 12:25:53.151717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-26 12:25:53.151744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 
00:25:00.083 [2024-07-26 12:25:53.151910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-26 12:25:53.151936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-26 12:25:53.152133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-26 12:25:53.152162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-26 12:25:53.152334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-26 12:25:53.152361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-26 12:25:53.152499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-26 12:25:53.152525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-26 12:25:53.152647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-26 12:25:53.152672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 
00:25:00.083 [2024-07-26 12:25:53.152856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-26 12:25:53.152884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.083 qpair failed and we were unable to recover it. 00:25:00.083 [2024-07-26 12:25:53.153062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.083 [2024-07-26 12:25:53.153088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.153244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.153269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.153422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.153469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.153642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.153668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 
00:25:00.084 [2024-07-26 12:25:53.153822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.153849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.154039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.154072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.154261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.154286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.154440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.154483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.154643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.154672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 
00:25:00.084 [2024-07-26 12:25:53.154823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.154848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.155018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.155054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.155241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.155279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.155452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.155478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.155629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.155653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 
00:25:00.084 [2024-07-26 12:25:53.155856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.155884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.156041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.156090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.156267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.156297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.156463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.156492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.156649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.156683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 
00:25:00.084 [2024-07-26 12:25:53.156840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.156894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.157099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.157129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.157281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.157306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.157439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.157468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.157630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.157675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 
00:25:00.084 [2024-07-26 12:25:53.157819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.157845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.157978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.158006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.158161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.158188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.158340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.158366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.158529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.158557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 
00:25:00.084 [2024-07-26 12:25:53.158736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.158773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.158962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.084 [2024-07-26 12:25:53.158988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.084 qpair failed and we were unable to recover it. 00:25:00.084 [2024-07-26 12:25:53.159139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.159169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.159310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.159345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.159514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.159540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-26 12:25:53.159679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.159706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.159918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.159952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.160115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.160141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.160271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.160297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.160455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.160481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-26 12:25:53.160664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.160690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.160860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.160888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.161090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.161120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.161294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.161331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.161498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.161528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-26 12:25:53.161661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.161689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.161868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.161895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.162075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.162104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.162286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.162312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.162465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.162490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-26 12:25:53.162691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.162720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.162884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.162912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.163066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.163092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.163224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.163251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.163435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.163461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-26 12:25:53.163639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.163666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.163861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.163890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.164079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.164108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.164256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.164281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.164459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.164494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-26 12:25:53.164704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.164731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.164914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.164939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.165133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.165163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.165333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.165361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.165547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.165572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 
00:25:00.085 [2024-07-26 12:25:53.165758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.165788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.165963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.165993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.085 [2024-07-26 12:25:53.166195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.085 [2024-07-26 12:25:53.166230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.085 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.166412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.166441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.166635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.166663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-26 12:25:53.166811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.166837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.166991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.167035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.167188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.167213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.167345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.167373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.167507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.167551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-26 12:25:53.167771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.167797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.167925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.167952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.168123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.168150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.168270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.168295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.168460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.168485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-26 12:25:53.168670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.168701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.168870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.168899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.169087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.169113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.169251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.169289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.169432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.169457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-26 12:25:53.169611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.169637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.169810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.169839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.169974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.170001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.170184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.170217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.170373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.170415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-26 12:25:53.170582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.170611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.170766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.170791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.170991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.171020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.171193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.171233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.171445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.171471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-26 12:25:53.171635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.171662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.171834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.171870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.172054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.172095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.172296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.172325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.172497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.172525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 
00:25:00.086 [2024-07-26 12:25:53.172727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.172754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.172922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.172951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.173121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.086 [2024-07-26 12:25:53.173152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.086 qpair failed and we were unable to recover it. 00:25:00.086 [2024-07-26 12:25:53.173299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.173330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.173459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.173501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 
00:25:00.087 [2024-07-26 12:25:53.173666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.173696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.173841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.173872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.174029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.174056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.174252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.174288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.174449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.174482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 
00:25:00.087 [2024-07-26 12:25:53.174710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.174737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.174863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.174889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.175048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.175079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.175226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.175263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.175407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.175437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 
00:25:00.087 [2024-07-26 12:25:53.175591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.175618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.175756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.175782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.175971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.176000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.176158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.176185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.176358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.176402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 
00:25:00.087 [2024-07-26 12:25:53.176553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.176581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.176760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.176790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.176977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.177006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.177169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.177201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.177331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.177365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 
00:25:00.087 [2024-07-26 12:25:53.177546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.177575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.177746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.177775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.177947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.177972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.178133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.178160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.178358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.178388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 
00:25:00.087 [2024-07-26 12:25:53.178538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.178564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.178691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.178733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.178907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.178944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.179125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.179152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.179314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.179354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 
00:25:00.087 [2024-07-26 12:25:53.179525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.179553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.179710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.179737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.179936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.087 [2024-07-26 12:25:53.179965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.087 qpair failed and we were unable to recover it. 00:25:00.087 [2024-07-26 12:25:53.180136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.180171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.180338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.180367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 
00:25:00.088 [2024-07-26 12:25:53.180518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.180561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.180696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.180724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.180874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.180901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.181079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.181109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.181239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.181268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 
00:25:00.088 [2024-07-26 12:25:53.181449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.181484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.181710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.181736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.181892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.181918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.182079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.182107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.182317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.182347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 
00:25:00.088 [2024-07-26 12:25:53.182515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.182544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.182723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.182756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.182915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.182948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.183115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.183159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.183328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.183355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 
00:25:00.088 [2024-07-26 12:25:53.183487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.183513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.183660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.183688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.183817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.183844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.184018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.184047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.184216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.184248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 
00:25:00.088 [2024-07-26 12:25:53.184394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.184420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.184546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.184571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.184792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.184821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.184969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.185002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 00:25:00.088 [2024-07-26 12:25:53.185160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.088 [2024-07-26 12:25:53.185186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.088 qpair failed and we were unable to recover it. 
00:25:00.091 [2024-07-26 12:25:53.197163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.091 [2024-07-26 12:25:53.197208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.091 qpair failed and we were unable to recover it.
00:25:00.091 [2024-07-26 12:25:53.197418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.197450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.197625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.197652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.197827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.197891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.198094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.198126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.198269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.198295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 
00:25:00.091 [2024-07-26 12:25:53.198432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.198477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.198676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.198705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.198855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.198883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.199041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.199088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.199253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.199281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 
00:25:00.091 [2024-07-26 12:25:53.199456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.199488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.199621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.199649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.199821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.199847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.200032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.200081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.200247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.200275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 
00:25:00.091 [2024-07-26 12:25:53.200420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.200465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.200646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.200681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.200833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.200861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.201025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.201064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.201210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.201236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 
00:25:00.091 [2024-07-26 12:25:53.201398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.201424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.201581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.201607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.201787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.091 [2024-07-26 12:25:53.201821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.091 qpair failed and we were unable to recover it. 00:25:00.091 [2024-07-26 12:25:53.201986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.202015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 00:25:00.092 [2024-07-26 12:25:53.202226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.202253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 
00:25:00.092 [2024-07-26 12:25:53.202414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.202440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 00:25:00.092 [2024-07-26 12:25:53.202638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.202666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 00:25:00.092 [2024-07-26 12:25:53.202877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.202909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 00:25:00.092 [2024-07-26 12:25:53.203068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.203095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 00:25:00.092 [2024-07-26 12:25:53.203271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.203300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 
00:25:00.092 [2024-07-26 12:25:53.203511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.203541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 00:25:00.092 [2024-07-26 12:25:53.203705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.203732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 00:25:00.092 [2024-07-26 12:25:53.203894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.203920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 00:25:00.092 [2024-07-26 12:25:53.204054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.204094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 00:25:00.092 [2024-07-26 12:25:53.204234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.204269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 
00:25:00.092 [2024-07-26 12:25:53.204404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.204448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 00:25:00.092 [2024-07-26 12:25:53.204612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.204640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 00:25:00.092 [2024-07-26 12:25:53.204832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.204859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 00:25:00.092 [2024-07-26 12:25:53.204997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.205027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 00:25:00.092 [2024-07-26 12:25:53.205218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.092 [2024-07-26 12:25:53.205245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.092 qpair failed and we were unable to recover it. 
00:25:00.092 [2024-07-26 12:25:53.205379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.205406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.205582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.205612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.205763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.205792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.205969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.205996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.206148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.206184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.206360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.206390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.206586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.206613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.206791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.206821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.206988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.207017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.207202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.207231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.207398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.207427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.207623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.207652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.207827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.207854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.207989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.208018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.208232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.208261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.208438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.208464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.208671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.208702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.208870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.208899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.092 [2024-07-26 12:25:53.209080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.092 [2024-07-26 12:25:53.209112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.092 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.209278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.209305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.209517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.209547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.209725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.209751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.209886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.209913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.210089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.210136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.210294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.210320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.210447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.210476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.210655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.210684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.210832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.210859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.210985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.211011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.211202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.211231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.211390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.211420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.211585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.211646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.211834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.211860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.212020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.212048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.212226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.212256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.212399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.212430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.212608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.212636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.212776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.212810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.212960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.212986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.213163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.213190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.213370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.213400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.213570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.213600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.213771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.213798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.213921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.213966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.214143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.214171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.214327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.214354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.214488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.214517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.214722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.214752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.214912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.214939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.215095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.215125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.215338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.215367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.215570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.215599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.215730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.215757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.093 qpair failed and we were unable to recover it.
00:25:00.093 [2024-07-26 12:25:53.215937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.093 [2024-07-26 12:25:53.215964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.216129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.216156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.216315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.216357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.216543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.216573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.216747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.216777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.216978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.217013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.217203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.217233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.217382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.217409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.217542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.217568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.217691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.217720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.217843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.217870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.218026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.218077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.218259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.218286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.218448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.218475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.218650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.218680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.218869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.218899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.219047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.219080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.219253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.219282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.219447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.219476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.219658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.219685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.219844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.219870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.220028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.220054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.220197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.220227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.220387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.220430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.220584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.220610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.220765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.220793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.220951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.220978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.221147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.221178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.221331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.221361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.221517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.221560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.221732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.221761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.221937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.221965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.222700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.222741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.222932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.222960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.223151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.223179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.223314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.094 [2024-07-26 12:25:53.223339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.094 qpair failed and we were unable to recover it.
00:25:00.094 [2024-07-26 12:25:53.223463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.223490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.223641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.223667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.223823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.223849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.224022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.224051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.224240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.224267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.224426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.224452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.224604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.224633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.224774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.224800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.224920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.224948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.225123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.225157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.225318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.225346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.225496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.225521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.225720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.225748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.225913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.225943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.226124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.226151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.226308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.226335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.226466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.226493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.226650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.226677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.226836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.226862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.227017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.227043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.227201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.227228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.227360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.227391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.227535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.227561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.227748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.227774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.227897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.227923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.228053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.228101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.228230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.228256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.228385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.228411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.228575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.228600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.228752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.228778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.095 [2024-07-26 12:25:53.228924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.095 [2024-07-26 12:25:53.228952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.095 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.229113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.229139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.229273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.229301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.229429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.229454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.229591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.229621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.229802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.229830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.229980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.230009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.230176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.230204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.230359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.230385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.230544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.230570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.230731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.230759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.230931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.230961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.231136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.231162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.231302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.231330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.231484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.231510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.231663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.231689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.231846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.231872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.232064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.232090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.232260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.232286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.232452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.232483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.232645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.232671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.232852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.232878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.233017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.233044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.233218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.233245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.233408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.233438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.233605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.233632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.233789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.233815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.233983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.234012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.234183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.234210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.234389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.234416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.234552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.234577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.234745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.234772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.234946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.234978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.235149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.235177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.235353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.235384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.235569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.096 [2024-07-26 12:25:53.235598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.096 qpair failed and we were unable to recover it.
00:25:00.096 [2024-07-26 12:25:53.235768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.235797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.235949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.235976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.236169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.236197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.236345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.236376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.236546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.236576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.236721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.236747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.236901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.236928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.237090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.237118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.237262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.237290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.237455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.237481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.237639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.237682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.237836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.237863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.238014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.238041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.238191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.238218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.238414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.238442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.238637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.238665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.238841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.238867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.239024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.239050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.239210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.239241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.239451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.239480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.239697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.239727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.239868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.239894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.240049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.240104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.240283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.240378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.240582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.240611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.240817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.240864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.241031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.241067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.241199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.241242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.241400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.241429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.241623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.241652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.241795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.241820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.241975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.242003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.242176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.242206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.242435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.242465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.242635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.097 [2024-07-26 12:25:53.242664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.097 qpair failed and we were unable to recover it.
00:25:00.097 [2024-07-26 12:25:53.242834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.097 [2024-07-26 12:25:53.242860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.243010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.243035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.243227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.243256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.243467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.243496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.243661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.243689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 
00:25:00.098 [2024-07-26 12:25:53.243844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.243870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.244047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.244093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.244222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.244249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.244370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.244396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.244552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.244578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 
00:25:00.098 [2024-07-26 12:25:53.244734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.244760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.244941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.244967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.245131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.245158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.245315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.245341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.245528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.245554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 
00:25:00.098 [2024-07-26 12:25:53.245704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.245734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.245891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.245917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.246076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.246103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.246224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.246250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.246372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.246397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 
00:25:00.098 [2024-07-26 12:25:53.246525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.246551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.246704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.246731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.246914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.246939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.247105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.247131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.247291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.247316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 
00:25:00.098 [2024-07-26 12:25:53.247463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.247489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.247606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.247631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.247812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.247837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.248025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.248066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.248207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.248233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 
00:25:00.098 [2024-07-26 12:25:53.248383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.248408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.248589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.248615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.248745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.248771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.248892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.248917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 00:25:00.098 [2024-07-26 12:25:53.249071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.098 [2024-07-26 12:25:53.249097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.098 qpair failed and we were unable to recover it. 
00:25:00.098 [2024-07-26 12:25:53.249279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.249305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.249458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.249484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.249658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.249683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.249831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.249857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.250000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.250026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 
00:25:00.099 [2024-07-26 12:25:53.250183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.250209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.250338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.250376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.250503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.250529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.250707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.250733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.250856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.250882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 
00:25:00.099 [2024-07-26 12:25:53.251035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.251067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.251202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.251228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.251365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.251392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.251520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.251547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.251698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.251724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 
00:25:00.099 [2024-07-26 12:25:53.251861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.251887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.252039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.252088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.252242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.252268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.252408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.252434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.252612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.252638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 
00:25:00.099 [2024-07-26 12:25:53.252816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.099 [2024-07-26 12:25:53.252845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.099 qpair failed and we were unable to recover it.
00:25:00.099 [2024-07-26 12:25:53.253000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.099 [2024-07-26 12:25:53.253026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.099 qpair failed and we were unable to recover it.
00:25:00.099 [2024-07-26 12:25:53.253205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.099 [2024-07-26 12:25:53.253243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.099 qpair failed and we were unable to recover it.
00:25:00.099 [2024-07-26 12:25:53.253411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.099 [2024-07-26 12:25:53.253439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.099 qpair failed and we were unable to recover it.
00:25:00.099 [2024-07-26 12:25:53.253603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.099 [2024-07-26 12:25:53.253629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.099 qpair failed and we were unable to recover it.
00:25:00.099 [2024-07-26 12:25:53.253787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.253813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.253941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.253967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.254098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.254125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.254277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.254303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.254456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.254481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 
00:25:00.099 [2024-07-26 12:25:53.254615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.254642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.254824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.254850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.254978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.255003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.255159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.255185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.255317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.255344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 
00:25:00.099 [2024-07-26 12:25:53.255499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.255525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.255681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.255707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.255830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.099 [2024-07-26 12:25:53.255856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.099 qpair failed and we were unable to recover it. 00:25:00.099 [2024-07-26 12:25:53.256008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.256034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.256198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.256225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 
00:25:00.100 [2024-07-26 12:25:53.256339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.256365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.256521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.256546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.256704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.256730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.256913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.256940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.257097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.257123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 
00:25:00.100 [2024-07-26 12:25:53.257259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.257284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.257419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.257445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.257594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.257619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.257784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.257810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.257984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.258012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 
00:25:00.100 [2024-07-26 12:25:53.258163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.258189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.258312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.258338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.258488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.258514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.258663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.258688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.258868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.258893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 
00:25:00.100 [2024-07-26 12:25:53.259020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.259046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.259212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.259238] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.259370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.259396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.259542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.259567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 00:25:00.100 [2024-07-26 12:25:53.259693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.100 [2024-07-26 12:25:53.259718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.100 qpair failed and we were unable to recover it. 
00:25:00.100 [2024-07-26 12:25:53.259872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.100 [2024-07-26 12:25:53.259901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.100 qpair failed and we were unable to recover it.
00:25:00.100 [2024-07-26 12:25:53.260083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.100 [2024-07-26 12:25:53.260108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.100 qpair failed and we were unable to recover it.
00:25:00.100 [2024-07-26 12:25:53.260228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.100 [2024-07-26 12:25:53.260253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.100 qpair failed and we were unable to recover it.
00:25:00.100 [2024-07-26 12:25:53.260440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.100 [2024-07-26 12:25:53.260465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.100 qpair failed and we were unable to recover it.
00:25:00.100 [2024-07-26 12:25:53.260590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.100 [2024-07-26 12:25:53.260615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.100 qpair failed and we were unable to recover it.
00:25:00.100 [2024-07-26 12:25:53.260768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.100 [2024-07-26 12:25:53.260793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.100 qpair failed and we were unable to recover it.
00:25:00.100 [2024-07-26 12:25:53.260975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.100 [2024-07-26 12:25:53.260999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.100 qpair failed and we were unable to recover it.
00:25:00.100 [2024-07-26 12:25:53.261165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.100 [2024-07-26 12:25:53.261191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.100 qpair failed and we were unable to recover it.
00:25:00.100 [2024-07-26 12:25:53.261338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.100 [2024-07-26 12:25:53.261362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.100 qpair failed and we were unable to recover it.
00:25:00.100 [2024-07-26 12:25:53.261544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.100 [2024-07-26 12:25:53.261568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.100 qpair failed and we were unable to recover it.
00:25:00.100 [2024-07-26 12:25:53.261714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.261737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.261887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.261911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.262068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.262093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.262224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.262248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.262508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.262532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.262712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.262737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.262917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.262945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.263114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.263139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.263300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.263325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.263480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.263504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.263654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.263678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.263827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.263852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.263992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.264031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.264196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.264223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.264387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.264430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.264550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.264575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.264766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.264791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.264933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.264960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.265127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.265154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.265372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.265398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.265566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.265592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.265756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.265781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.265938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.265963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.266148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.266175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.266317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.266342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.266494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.266519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.266703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.266729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.266888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.266915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.267071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.267097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.267255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.267279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.267412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.267440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.267570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.267595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.267751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.267777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.268024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.268051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.268189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.268215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.268372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.268398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.268523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.268548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.268681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.101 [2024-07-26 12:25:53.268707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.101 qpair failed and we were unable to recover it.
00:25:00.101 [2024-07-26 12:25:53.268867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.268894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.269049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.269083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.269205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.269232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.269394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.269420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.269570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.269596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.269777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.269802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.269930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.269956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.270113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.270139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.270296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.270321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.270485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.270511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.270627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.270651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.270805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.270829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.270959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.270985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.271119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.271142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.271299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.271324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.271487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.271514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.271669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.271694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.271850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.271874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.272003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.272028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.272153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.272178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.272330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.272356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.272511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.272537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.272662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.272686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.272849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.272873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.272992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.273016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.273168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.273195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.273344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.273368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.273551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.273575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.273758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.273783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.273926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.273955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.274124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.274150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.274270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.274295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.274444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.274472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.274630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.274655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.274800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.274825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.274982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.275007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.275163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.275189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.275344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.275368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.275519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.275544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.275694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.275718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.102 qpair failed and we were unable to recover it.
00:25:00.102 [2024-07-26 12:25:53.275912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.102 [2024-07-26 12:25:53.275951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.103 qpair failed and we were unable to recover it.
00:25:00.103 [2024-07-26 12:25:53.276084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.103 [2024-07-26 12:25:53.276111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.103 qpair failed and we were unable to recover it.
00:25:00.103 [2024-07-26 12:25:53.276296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.103 [2024-07-26 12:25:53.276323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.103 qpair failed and we were unable to recover it.
00:25:00.103 [2024-07-26 12:25:53.276471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.103 [2024-07-26 12:25:53.276497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.103 qpair failed and we were unable to recover it.
00:25:00.103 [2024-07-26 12:25:53.276679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.103 [2024-07-26 12:25:53.276704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.103 qpair failed and we were unable to recover it.
00:25:00.103 [2024-07-26 12:25:53.276854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.103 [2024-07-26 12:25:53.276880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.103 qpair failed and we were unable to recover it.
00:25:00.103 [2024-07-26 12:25:53.277070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.103 [2024-07-26 12:25:53.277114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.103 qpair failed and we were unable to recover it.
00:25:00.103 [2024-07-26 12:25:53.277240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.277265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.277419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.277445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.277625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.277650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.277779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.277805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.277964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.277990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 
00:25:00.103 [2024-07-26 12:25:53.278116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.278142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.278298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.278325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.278486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.278512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.278634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.278660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.278814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.278840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 
00:25:00.103 [2024-07-26 12:25:53.279006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.279033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.279190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.279216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.279411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.279437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.279566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.279591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.279778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.279803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 
00:25:00.103 [2024-07-26 12:25:53.279980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.280008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.280203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.280229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.280392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.280417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.280571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.280597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.280732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.280758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 
00:25:00.103 [2024-07-26 12:25:53.280930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.280959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.281140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.281166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.281320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.281345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.281496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.281521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.281673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.281699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 
00:25:00.103 [2024-07-26 12:25:53.281857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.281887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.282040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.282088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.282235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.282262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.282457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.282484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.282637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.282664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 
00:25:00.103 [2024-07-26 12:25:53.282819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.282844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.282979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.283005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.283136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.283162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.283350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.283375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 00:25:00.103 [2024-07-26 12:25:53.283534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.103 [2024-07-26 12:25:53.283560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.103 qpair failed and we were unable to recover it. 
00:25:00.104 [2024-07-26 12:25:53.283744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.104 [2024-07-26 12:25:53.283781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.104 qpair failed and we were unable to recover it. 00:25:00.104 [2024-07-26 12:25:53.283983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.104 [2024-07-26 12:25:53.284012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.104 qpair failed and we were unable to recover it. 00:25:00.104 [2024-07-26 12:25:53.284160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.104 [2024-07-26 12:25:53.284186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.104 qpair failed and we were unable to recover it. 00:25:00.374 [2024-07-26 12:25:53.284341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.374 [2024-07-26 12:25:53.284370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.374 qpair failed and we were unable to recover it. 00:25:00.374 [2024-07-26 12:25:53.284533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.374 [2024-07-26 12:25:53.284558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.374 qpair failed and we were unable to recover it. 
00:25:00.374 [2024-07-26 12:25:53.284718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.374 [2024-07-26 12:25:53.284747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.374 qpair failed and we were unable to recover it. 00:25:00.374 [2024-07-26 12:25:53.284899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.374 [2024-07-26 12:25:53.284944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.374 qpair failed and we were unable to recover it. 00:25:00.374 [2024-07-26 12:25:53.285099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.374 [2024-07-26 12:25:53.285124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.374 qpair failed and we were unable to recover it. 00:25:00.374 [2024-07-26 12:25:53.285249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.374 [2024-07-26 12:25:53.285275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.374 qpair failed and we were unable to recover it. 00:25:00.374 [2024-07-26 12:25:53.285437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.374 [2024-07-26 12:25:53.285462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.374 qpair failed and we were unable to recover it. 
00:25:00.374 [2024-07-26 12:25:53.285611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.374 [2024-07-26 12:25:53.285635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.374 qpair failed and we were unable to recover it. 00:25:00.374 [2024-07-26 12:25:53.285788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.374 [2024-07-26 12:25:53.285812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.374 qpair failed and we were unable to recover it. 00:25:00.374 [2024-07-26 12:25:53.285969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.374 [2024-07-26 12:25:53.285994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.374 qpair failed and we were unable to recover it. 00:25:00.374 [2024-07-26 12:25:53.286145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.374 [2024-07-26 12:25:53.286170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.374 qpair failed and we were unable to recover it. 00:25:00.374 [2024-07-26 12:25:53.286358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.374 [2024-07-26 12:25:53.286382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 
00:25:00.375 [2024-07-26 12:25:53.286534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.286559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.286705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.286730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.286892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.286918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.287068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.287094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.287248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.287273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 
00:25:00.375 [2024-07-26 12:25:53.287464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.287489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.287602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.287626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.287772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.287797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.287978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.288003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.288141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.288166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 
00:25:00.375 [2024-07-26 12:25:53.288291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.288316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.288453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.288478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.288632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.288656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.288780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.288805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.288965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.288990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 
00:25:00.375 [2024-07-26 12:25:53.289146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.289175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.289350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.289378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.289571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.289618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.289789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.289818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.289975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.290003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 
00:25:00.375 [2024-07-26 12:25:53.290208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.290239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.290405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.290432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.290599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.290626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.290791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.290819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.290975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.291004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 
00:25:00.375 [2024-07-26 12:25:53.291199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.291227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.291418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.291450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.291635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.291662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.291872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.291897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.292121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.292146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 
00:25:00.375 [2024-07-26 12:25:53.292300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.292324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.292476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.292499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.292653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.292678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.292835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.292860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 00:25:00.375 [2024-07-26 12:25:53.293011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.293035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it. 
00:25:00.375 [2024-07-26 12:25:53.293177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.375 [2024-07-26 12:25:53.293202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.375 qpair failed and we were unable to recover it.
[... the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats continuously between 12:25:53.293 and 12:25:53.315 (over a hundred occurrences) for tqpair handles 0x7fb500000b90, 0x7fb4f0000b90, and 0x7fb4f8000b90, every attempt targeting addr=10.0.0.2, port=4420 and ending with "qpair failed and we were unable to recover it." ...]
00:25:00.377 [2024-07-26 12:25:53.307304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21cd230 (9): Bad file descriptor
00:25:00.378 [2024-07-26 12:25:53.315400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.315444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it.
00:25:00.378 [2024-07-26 12:25:53.315622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.315666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.315846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.315872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.316021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.316065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.316245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.316289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.316472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.316516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 
00:25:00.378 [2024-07-26 12:25:53.316670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.316713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.316841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.316868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.316997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.317024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.317221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.317265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.317463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.317490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 
00:25:00.378 [2024-07-26 12:25:53.317672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.317698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.317848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.317874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.318009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.318035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.318259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.318303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.318521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.318564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 
00:25:00.378 [2024-07-26 12:25:53.318771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.318814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.318990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.319015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.319192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.319236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.319406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.319448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.319626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.319670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 
00:25:00.378 [2024-07-26 12:25:53.319824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.319850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.319998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.320025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.320212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.320241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.320458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.320501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.320707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.320750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 
00:25:00.378 [2024-07-26 12:25:53.320933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.320959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.321145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.321189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.321340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.321369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.321556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.321582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.321759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.321801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 
00:25:00.378 [2024-07-26 12:25:53.321932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.321959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.322128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.322172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.322301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.322327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.378 [2024-07-26 12:25:53.322490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.378 [2024-07-26 12:25:53.322516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.378 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.322691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.322716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-26 12:25:53.322836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.322861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.322989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.323016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.323199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.323244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.323423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.323470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.323677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.323720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-26 12:25:53.323861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.323887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.324043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.324081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.324262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.324306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.324491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.324535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.324739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.324782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-26 12:25:53.324936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.324962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.325146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.325190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.325346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.325396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.325563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.325606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.325765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.325791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-26 12:25:53.325945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.325971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.326179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.326223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.326431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.326458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.326639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.326683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.326841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.326866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-26 12:25:53.327608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.327638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.327849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.327893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.328077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.328105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.328287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.328331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.328538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.328582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-26 12:25:53.328764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.328808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.328960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.328986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.329158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.329202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.329352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.329395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.329554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.329601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-26 12:25:53.329732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.329759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.329937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.329963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.330141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.330185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.330369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.330414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.331155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.331187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-26 12:25:53.331383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.331427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.331890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.331918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.332118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.332146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.332277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.332304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 00:25:00.379 [2024-07-26 12:25:53.332472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.332499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it. 
00:25:00.379 [2024-07-26 12:25:53.332636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.379 [2024-07-26 12:25:53.332662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.379 qpair failed and we were unable to recover it.
[the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats over a hundred more times between 12:25:53.332 and 12:25:53.358, every attempt failing with errno = 111 against addr=10.0.0.2, port=4420 and ending with "qpair failed and we were unable to recover it."; tqpair is 0x7fb4f8000b90 for most attempts, with a stretch of attempts against tqpair=0x21bf250 in the middle]
00:25:00.382 [2024-07-26 12:25:53.358478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.358527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.358705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.358750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.358939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.358965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.359145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.359190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.359373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.359416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 
00:25:00.382 [2024-07-26 12:25:53.359600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.359644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.359777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.359803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.359922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.359948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.360117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.360161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.360339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.360384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 
00:25:00.382 [2024-07-26 12:25:53.360580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.360623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.360805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.360836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.361000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.361029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.361299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.361328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.361533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.361562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 
00:25:00.382 [2024-07-26 12:25:53.361707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.361736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.361912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.361938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.362094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.362121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.362272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.362316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.362501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.362544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 
00:25:00.382 [2024-07-26 12:25:53.362724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.362767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.362915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.362941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.363134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.363192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.363372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.363418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.363598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.363641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 
00:25:00.382 [2024-07-26 12:25:53.363826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.363852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.363976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.364002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.364163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.364190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.364309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.364335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.364489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.364515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 
00:25:00.382 [2024-07-26 12:25:53.364667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.364693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.364846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.364874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.365039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.365080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.365241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.365269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.365465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.365494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 
00:25:00.382 [2024-07-26 12:25:53.365627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.365655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.365832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.365860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.366037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.366081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.366273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.366300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.366455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.366497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 
00:25:00.382 [2024-07-26 12:25:53.366677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.366721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.366900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.366942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.367128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.367155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.367308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.367337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 00:25:00.382 [2024-07-26 12:25:53.367554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.367596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.382 qpair failed and we were unable to recover it. 
00:25:00.382 [2024-07-26 12:25:53.367774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.382 [2024-07-26 12:25:53.367817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.367969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.367996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.368207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.368237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.368457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.368501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.368720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.368770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 
00:25:00.383 [2024-07-26 12:25:53.368898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.368924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.369104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.369148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.369322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.369375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.369577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.369620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.369796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.369823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 
00:25:00.383 [2024-07-26 12:25:53.370002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.370028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.370186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.370229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.370441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.370484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.370652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.370696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.370869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.370895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 
00:25:00.383 [2024-07-26 12:25:53.371071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.371098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.371274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.371317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.371500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.371543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.371754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.371798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.371954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.371980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 
00:25:00.383 [2024-07-26 12:25:53.372199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.372243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.372426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.372469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.372645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.372692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.372872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.372898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.373053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.373085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 
00:25:00.383 [2024-07-26 12:25:53.373263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.373307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.373459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.373502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.373673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.373715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.373913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.373939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.374069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.374095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 
00:25:00.383 [2024-07-26 12:25:53.374270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.374315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.374471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.374503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.374667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.374695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.374866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.374894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.375064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.375091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 
00:25:00.383 [2024-07-26 12:25:53.375241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.375269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.375466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.375494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.375674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.375720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.375930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.375959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.376124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.376150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 
00:25:00.383 [2024-07-26 12:25:53.376307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.376335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.376482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.376511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.376707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.376736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.376933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.376978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.377135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.377166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 
00:25:00.383 [2024-07-26 12:25:53.377325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.377379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.377557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.377601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.377802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.377846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.378024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.378067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.378226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.378253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 
00:25:00.383 [2024-07-26 12:25:53.378441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.378484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.378667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.378711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.378863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.378889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.383 [2024-07-26 12:25:53.379069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.383 [2024-07-26 12:25:53.379095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.383 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.379240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.379284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 
00:25:00.384 [2024-07-26 12:25:53.379453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.379496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.379672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.379714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.379848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.379874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.380071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.380098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.380277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.380325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 
00:25:00.384 [2024-07-26 12:25:53.380516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.380545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.380760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.380809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.380946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.380972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.381121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.381149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.381352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.381395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 
00:25:00.384 [2024-07-26 12:25:53.381566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.381609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.381771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.381814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.381975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.382001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.382173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.382216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.382420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.382464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 
00:25:00.384 [2024-07-26 12:25:53.382670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.382719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.382871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.382897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.383048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.383083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.383254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.383283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.383453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.383481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 
00:25:00.384 [2024-07-26 12:25:53.383646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.383674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.383907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.383936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.384082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.384124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.384303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.384328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.384676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.384731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 
00:25:00.384 [2024-07-26 12:25:53.384924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.384952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.385166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.385193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.385346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.385377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.385578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.385606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.385776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.385805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 
00:25:00.384 [2024-07-26 12:25:53.385988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.386014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.386200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.386227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.386402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.386431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.386622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.386650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.386815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.386843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 
00:25:00.384 [2024-07-26 12:25:53.386995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.387020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.387232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.387272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.387462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.387489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.387637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.387682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.387838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.387882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 
00:25:00.384 [2024-07-26 12:25:53.388037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.388081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.388204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.388230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.388425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.388452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.388639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.388665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.388828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.388854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 
00:25:00.384 [2024-07-26 12:25:53.389030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.389065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.389198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.389224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.389384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.389410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.389584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.389612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.389776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.389804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 
00:25:00.384 [2024-07-26 12:25:53.389951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.389977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.390154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.390181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.384 [2024-07-26 12:25:53.390307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.384 [2024-07-26 12:25:53.390349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.384 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.390522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.390550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.390710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.390738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 
00:25:00.385 [2024-07-26 12:25:53.390902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.390930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.391099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.391143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.391327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.391364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.391518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.391548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.391694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.391723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 
00:25:00.385 [2024-07-26 12:25:53.391987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.392016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.392204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.392230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.392359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.392402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.392656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.392706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.392876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.392904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 
00:25:00.385 [2024-07-26 12:25:53.393142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.393168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.393324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.393362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.393538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.393567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.393739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.393768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.393935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.393963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 
00:25:00.385 [2024-07-26 12:25:53.394136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.394167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.394294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.394320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.394532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.394560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.394730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.394759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.394900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.394928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 
00:25:00.385 [2024-07-26 12:25:53.395117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.395157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.395312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.395373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.395547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.395592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.395796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.395840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.395963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.395990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 
00:25:00.385 [2024-07-26 12:25:53.396189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.396233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.396490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.396539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.396747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.396790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.396919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.396944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.397127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.397157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 
00:25:00.385 [2024-07-26 12:25:53.397362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.397391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.397568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.397596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.397802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.397831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.398004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.398030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.398175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.398201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 
00:25:00.385 [2024-07-26 12:25:53.398359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.398388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.398549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.398577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.398768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.398796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.398964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.398990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.399132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.399158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 
00:25:00.385 [2024-07-26 12:25:53.399314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.399356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.399640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.399698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.399866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.399899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.400044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.400082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.400257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.400282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 
00:25:00.385 [2024-07-26 12:25:53.400482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.400511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.400652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.400682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.400876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.385 [2024-07-26 12:25:53.400937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.385 qpair failed and we were unable to recover it. 00:25:00.385 [2024-07-26 12:25:53.401138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.401164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.401323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.401367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 
00:25:00.386 [2024-07-26 12:25:53.401589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.401617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.401761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.401789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.401957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.401986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.402163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.402189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.402349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.402378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 
00:25:00.386 [2024-07-26 12:25:53.402543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.402571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.402768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.402797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.402991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.403019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.403225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.403251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.403420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.403446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 
00:25:00.386 [2024-07-26 12:25:53.403597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.403624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.403788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.403816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.404012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.404041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.404208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.404234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.404363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.404390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 
00:25:00.386 [2024-07-26 12:25:53.404610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.404640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.404794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.404823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.404960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.404989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.405149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.405177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.405332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.405385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 
00:25:00.386 [2024-07-26 12:25:53.405548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.405606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.405813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.405844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.406009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.406038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.406254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.406281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.406479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.406508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 
00:25:00.386 [2024-07-26 12:25:53.406646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.406677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.406828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.406857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.407027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.407071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.407222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.407248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.407433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.407464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 
00:25:00.386 [2024-07-26 12:25:53.407626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.407657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.407821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.407850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.407980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.408010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.408224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.408263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.408429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.408457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 
00:25:00.386 [2024-07-26 12:25:53.408701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.408751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.408912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.408956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.409142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.409169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.409345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.409371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.409545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.409571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 
00:25:00.386 [2024-07-26 12:25:53.409691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.409716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.409893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.409920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.410075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.410102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.410254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.410299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.410474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.410517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 
00:25:00.386 [2024-07-26 12:25:53.410702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.410728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.410908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.410934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.411136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.411180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.411386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.411429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.411609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.411653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 
00:25:00.386 [2024-07-26 12:25:53.411807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.411834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.411987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.412013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.412189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.412235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.412441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.412471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.386 qpair failed and we were unable to recover it. 00:25:00.386 [2024-07-26 12:25:53.412706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.386 [2024-07-26 12:25:53.412749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.387 qpair failed and we were unable to recover it. 
00:25:00.387 [2024-07-26 12:25:53.412917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.387 [2024-07-26 12:25:53.412944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.387 qpair failed and we were unable to recover it. 00:25:00.387 [2024-07-26 12:25:53.413068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.387 [2024-07-26 12:25:53.413095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.387 qpair failed and we were unable to recover it. 00:25:00.387 [2024-07-26 12:25:53.413275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.387 [2024-07-26 12:25:53.413303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.387 qpair failed and we were unable to recover it. 00:25:00.387 [2024-07-26 12:25:53.413507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.387 [2024-07-26 12:25:53.413551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.387 qpair failed and we were unable to recover it. 00:25:00.387 [2024-07-26 12:25:53.413693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.387 [2024-07-26 12:25:53.413742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.387 qpair failed and we were unable to recover it. 
00:25:00.387 [2024-07-26 12:25:53.413872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.387 [2024-07-26 12:25:53.413899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.387 qpair failed and we were unable to recover it. 00:25:00.387 [2024-07-26 12:25:53.414081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.387 [2024-07-26 12:25:53.414108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.387 qpair failed and we were unable to recover it. 00:25:00.387 [2024-07-26 12:25:53.414257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.387 [2024-07-26 12:25:53.414300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.387 qpair failed and we were unable to recover it. 00:25:00.387 [2024-07-26 12:25:53.414473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.387 [2024-07-26 12:25:53.414516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.387 qpair failed and we were unable to recover it. 00:25:00.387 [2024-07-26 12:25:53.414660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.387 [2024-07-26 12:25:53.414704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.387 qpair failed and we were unable to recover it. 
00:25:00.387 [2024-07-26 12:25:53.423663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.387 [2024-07-26 12:25:53.423702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.387 qpair failed and we were unable to recover it.
00:25:00.387 [2024-07-26 12:25:53.423912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.387 [2024-07-26 12:25:53.423942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.387 qpair failed and we were unable to recover it.
00:25:00.387 [2024-07-26 12:25:53.424104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.387 [2024-07-26 12:25:53.424132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.387 qpair failed and we were unable to recover it.
00:25:00.387 [2024-07-26 12:25:53.424357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.388 [2024-07-26 12:25:53.424386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.388 qpair failed and we were unable to recover it.
00:25:00.388 [2024-07-26 12:25:53.424527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.388 [2024-07-26 12:25:53.424556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.388 qpair failed and we were unable to recover it.
00:25:00.389 [2024-07-26 12:25:53.436408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.436434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.436583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.436622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.436791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.436819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.436987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.437016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.437199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.437225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 
00:25:00.389 [2024-07-26 12:25:53.437376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.437423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.437563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.437592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.437758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.437787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.437953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.437982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.438165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.438192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 
00:25:00.389 [2024-07-26 12:25:53.438313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.438355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.438483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.438512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.438672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.438701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.438860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.438888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.439020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.439049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 
00:25:00.389 [2024-07-26 12:25:53.439232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.439258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.439387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.439413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.439563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.439588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.439734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.439763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.439936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.439965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 
00:25:00.389 [2024-07-26 12:25:53.440121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.440148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.440300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.440326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.440486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.440511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.440713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.440741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.440930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.440958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 
00:25:00.389 [2024-07-26 12:25:53.441136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.441163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.441320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.441362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.441498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.441526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.441739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.441764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.441948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.441976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 
00:25:00.389 [2024-07-26 12:25:53.442135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.442162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.442317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.442343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.442481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.442509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.442654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.442683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.442920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.442949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 
00:25:00.389 [2024-07-26 12:25:53.443136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.443162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.443362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.443390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.443559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.443587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.443759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.443788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.443927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.443955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 
00:25:00.389 [2024-07-26 12:25:53.444153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.444179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.444304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.444331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.444541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.444569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.444808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.444837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.444970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.444999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 
00:25:00.389 [2024-07-26 12:25:53.445202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.445228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.445383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.445423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.445636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.445682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.445833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.445878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 00:25:00.389 [2024-07-26 12:25:53.446002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.389 [2024-07-26 12:25:53.446027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.389 qpair failed and we were unable to recover it. 
00:25:00.389 [2024-07-26 12:25:53.446169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.446195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.446345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.446390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.446592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.446636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.446839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.446883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.447035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.447066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 
00:25:00.390 [2024-07-26 12:25:53.447199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.447225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.447376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.447419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.447561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.447589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.447778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.447809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.447947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.447980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 
00:25:00.390 [2024-07-26 12:25:53.448185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.448211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.448364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.448394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.448536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.448565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.448735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.448763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.448934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.448963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 
00:25:00.390 [2024-07-26 12:25:53.449143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.449170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.449298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.449324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.449499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.449527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.449719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.449748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.449914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.449943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 
00:25:00.390 [2024-07-26 12:25:53.450087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.450114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.450287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.450315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.450471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.450499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.450668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.450714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 00:25:00.390 [2024-07-26 12:25:53.450868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.390 [2024-07-26 12:25:53.450895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.390 qpair failed and we were unable to recover it. 
00:25:00.390 [2024-07-26 12:25:53.451051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.451094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.451253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.451297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.451467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.451511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.451694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.451740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.451892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.451919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.452124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.452169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.452344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.452372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.452565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.452610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.452766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.452792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.452942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.452968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.453143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.453174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.453324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.453357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.453525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.453553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.453703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.453729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.453865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.453891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.454039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.454073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.454240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.454268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.454411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.454439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.454630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.454658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.454828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.454857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.455026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.455055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.455260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.455286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.455439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.455468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.455635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.455664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.455825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.455854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.456001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.456027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.456163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.456190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.456346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.456372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.456550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.456579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.456718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.456747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.456906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.456935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.457146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.457175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.457337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.390 [2024-07-26 12:25:53.457364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.390 qpair failed and we were unable to recover it.
00:25:00.390 [2024-07-26 12:25:53.457569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.457612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.457761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.457804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.457934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.457960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.458104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.458130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.458271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.458298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.458486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.458519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.458643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.458669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.458799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.458825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.458970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.458996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.459121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.459147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.459322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.459351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.459520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.459549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.459722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.459751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.459893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.459919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.460046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.460077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.460206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.460232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.460432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.460461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.460595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.460624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.460785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.460813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.460957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.460984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.461137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.461164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.461336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.461365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.461506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.461535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.461728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.461757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.461916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.461945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.462115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.462141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.462299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.462325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.462480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.462509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.462692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.462720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.462900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.462926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.463081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.463108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.463287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.463313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.463494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.463522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.463807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.463836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.463981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.464006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.464138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.464164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.464298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.464324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.464486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.464529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.464696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.464724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.464891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.464920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.465057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.465092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.465272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.465298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.465446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.465475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.465669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.465698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.465873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.465901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.466085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.466112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.466293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.466350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.466571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.466615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.466816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.466861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.467017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.467044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.467215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.467243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.467407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.467453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.467622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.467665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.467791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.467817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.467971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.467997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.391 qpair failed and we were unable to recover it.
00:25:00.391 [2024-07-26 12:25:53.468173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.391 [2024-07-26 12:25:53.468218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.468362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.468406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.468587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.468630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.468757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.468782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.468936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.468966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.469115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.469160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.469312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.469356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.469532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.469576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.469725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.469750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.469922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.469948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.470072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.470099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.470243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.470286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.470490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.470532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.470685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.470711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.470861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.470886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.471040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.471091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.471265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.471309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.471461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.471503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.471660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.471686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.471840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.471866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.472043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.472081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.472263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.472310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.472475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.472501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.472655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.472698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.472852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.472878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.473002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.473029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.473213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.392 [2024-07-26 12:25:53.473259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.392 qpair failed and we were unable to recover it.
00:25:00.392 [2024-07-26 12:25:53.473453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.473496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.473688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.473715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.473844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.473871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.474030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.474065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.474217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.474243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 
00:25:00.392 [2024-07-26 12:25:53.474393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.474422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.474653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.474681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.474848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.474878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.475025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.475054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.475247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.475275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 
00:25:00.392 [2024-07-26 12:25:53.475466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.475495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.475636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.475665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.475803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.475831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.476007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.476033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.476200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.476226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 
00:25:00.392 [2024-07-26 12:25:53.476416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.476445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.476584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.476613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.476790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.476819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.477029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.477075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.477216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.477244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 
00:25:00.392 [2024-07-26 12:25:53.477418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.477462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.477632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.477678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.477836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.477879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.478009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.478035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.478245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.478288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 
00:25:00.392 [2024-07-26 12:25:53.478448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.478490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.478659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.478688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.478892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.478918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.479045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.479080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.479238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.479282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 
00:25:00.392 [2024-07-26 12:25:53.479467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.479494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.479715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.479768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.479925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.479952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.480092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.480120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 00:25:00.392 [2024-07-26 12:25:53.480275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.480319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.392 qpair failed and we were unable to recover it. 
00:25:00.392 [2024-07-26 12:25:53.480496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.392 [2024-07-26 12:25:53.480542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.480720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.480765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.480895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.480921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.481081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.481107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.481285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.481331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 
00:25:00.393 [2024-07-26 12:25:53.481511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.481558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.481718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.481744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.481919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.481946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.482124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.482169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.482349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.482396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 
00:25:00.393 [2024-07-26 12:25:53.482562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.482606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.482760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.482785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.482918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.482944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.483077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.483103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.483279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.483323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 
00:25:00.393 [2024-07-26 12:25:53.483496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.483540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.483683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.483722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.483890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.483918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.484050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.484083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.484239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.484268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 
00:25:00.393 [2024-07-26 12:25:53.484466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.484494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.484658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.484686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.484898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.484926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.485106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.485136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.485309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.485352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 
00:25:00.393 [2024-07-26 12:25:53.485500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.485543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.485746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.485790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.485919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.485946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.486143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.486173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.486312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.486341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 
00:25:00.393 [2024-07-26 12:25:53.486497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.486525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.486665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.486694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.486857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.486886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.487096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.487122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.487264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.487293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 
00:25:00.393 [2024-07-26 12:25:53.487433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.487462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.487635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.487669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.487841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.487871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.488040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.488077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.488283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.488312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 
00:25:00.393 [2024-07-26 12:25:53.488496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.488538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.488720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.488764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.488896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.488923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.489127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.489171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.489312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.489341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 
00:25:00.393 [2024-07-26 12:25:53.489535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.489563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.489739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.489767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.489956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.489984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.490137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.490163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.490317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.490358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 
00:25:00.393 [2024-07-26 12:25:53.490553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.490582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.490751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.490780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.490924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.490950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.491071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.491097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.491243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.491268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 
00:25:00.393 [2024-07-26 12:25:53.491421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.491450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.491590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.491619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.491757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.491786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.491973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.491999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 00:25:00.393 [2024-07-26 12:25:53.492153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.393 [2024-07-26 12:25:53.492179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.393 qpair failed and we were unable to recover it. 
00:25:00.394 [2024-07-26 12:25:53.492298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.492350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.492486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.492529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.492664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.492693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.492862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.492895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.493100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.493140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 
00:25:00.394 [2024-07-26 12:25:53.493302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.493330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.493505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.493550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.493718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.493761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.493923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.493949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.494106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.494132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 
00:25:00.394 [2024-07-26 12:25:53.494312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.494341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.494562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.494605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.494779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.494823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.495005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.495031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.495211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.495254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 
00:25:00.394 [2024-07-26 12:25:53.495403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.495446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.495605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.495635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.495807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.495836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.495994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.496020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.496156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.496198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 
00:25:00.394 [2024-07-26 12:25:53.496341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.496370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.496538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.496567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.496734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.496763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.496925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.496953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.497134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.497161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 
00:25:00.394 [2024-07-26 12:25:53.497321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.497349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.497493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.497536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.497720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.497763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.497919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.497946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.498075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.498102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 
00:25:00.394 [2024-07-26 12:25:53.498306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.498352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.498527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.498570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.498753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.498796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.498922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.498948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.499130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.499161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 
00:25:00.394 [2024-07-26 12:25:53.499311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.499339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.499533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.499562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.499738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.499764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.499943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.499969] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.500122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.500148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 
00:25:00.394 [2024-07-26 12:25:53.500271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.500313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.500464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.500506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.500699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.500727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.500899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.500927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.501144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.501171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 
00:25:00.394 [2024-07-26 12:25:53.501303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.501328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.501578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.501607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.501808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.501836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.502005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.502034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.502183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.502210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 
00:25:00.394 [2024-07-26 12:25:53.502413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.502442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.502610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.502639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.502969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.503020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.503215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.503241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.503418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.503446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 
00:25:00.394 [2024-07-26 12:25:53.503765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.503823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.504021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.504049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.504231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.504261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.504411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.504440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.504598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.504627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 
00:25:00.394 [2024-07-26 12:25:53.504793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.504822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.394 qpair failed and we were unable to recover it. 00:25:00.394 [2024-07-26 12:25:53.504962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.394 [2024-07-26 12:25:53.504990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 00:25:00.395 [2024-07-26 12:25:53.505161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.395 [2024-07-26 12:25:53.505188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 00:25:00.395 [2024-07-26 12:25:53.505353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.395 [2024-07-26 12:25:53.505381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 00:25:00.395 [2024-07-26 12:25:53.505626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.395 [2024-07-26 12:25:53.505679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 
00:25:00.395 [2024-07-26 12:25:53.505848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.395 [2024-07-26 12:25:53.505876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 00:25:00.395 [2024-07-26 12:25:53.506025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.395 [2024-07-26 12:25:53.506054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 00:25:00.395 [2024-07-26 12:25:53.506232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.395 [2024-07-26 12:25:53.506258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 00:25:00.395 [2024-07-26 12:25:53.506404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.395 [2024-07-26 12:25:53.506432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 00:25:00.395 [2024-07-26 12:25:53.506576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.395 [2024-07-26 12:25:53.506605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 
00:25:00.395 [2024-07-26 12:25:53.506869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.395 [2024-07-26 12:25:53.506916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 00:25:00.395 [2024-07-26 12:25:53.507079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.395 [2024-07-26 12:25:53.507118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 00:25:00.395 [2024-07-26 12:25:53.507277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.395 [2024-07-26 12:25:53.507304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 00:25:00.395 [2024-07-26 12:25:53.507582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.395 [2024-07-26 12:25:53.507607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 00:25:00.395 [2024-07-26 12:25:53.507796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.395 [2024-07-26 12:25:53.507821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.395 qpair failed and we were unable to recover it. 
00:25:00.395 [2024-07-26 12:25:53.508005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.508033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.508222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.508248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.508421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.508449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.508618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.508646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.508855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.508881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.509028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.509054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.509219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.509245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.509419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.509445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.509658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.509687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.509827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.509860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.510034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.510068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.510224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.510250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.510406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.510435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.510601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.510629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.510832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.510860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.511013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.511038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.511204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.511230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.511385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.511413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.511576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.511604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.511745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.511774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.511964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.511992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.512194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.512221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.512353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.512379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.512577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.512606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.512795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.512823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.512982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.513011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.513192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.513218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.513374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.513400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.513578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.513606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.513777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.513805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.513972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.514002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.514177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.514217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.514405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.514433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.514607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.514652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.514868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.514917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.515135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.515162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.515343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.515391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.515669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.515712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.515923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.515966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.516125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.516152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.516327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.516371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.516576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.516618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.516798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.516842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.516967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.516993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.517165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.517211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.517389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.517434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.517740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.517792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.517929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.517956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.518157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.518201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.395 [2024-07-26 12:25:53.518349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.395 [2024-07-26 12:25:53.518392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.395 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.518600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.518643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.518787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.518813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.518941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.518968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.519129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.519173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.519352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.519399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.519576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.519607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.519781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.519809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.519977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.520006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.520183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.520212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.520382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.520410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.520559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.520588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.520758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.520788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.520962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.520991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.521167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.521199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.521344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.521374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.521509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.521538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.521679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.521708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.521876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.521904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.522078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.522121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.522246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.522272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.522444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.522489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.522698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.522741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.522975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.523017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.523151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.523178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.523395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.523438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.523608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.523651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.523816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.523859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.524042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.524073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.524255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.524281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.524515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.524564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.524742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.524785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.524966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.524992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.525197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.525228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.525363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.525392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.525529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.525558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.525729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.525759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.525929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.525957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.526097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.526138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.526327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.526353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.526555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.526583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.526735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.526779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.526945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.526970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.527127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.527153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.527268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.527293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.527474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.527503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.527664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.527692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.527828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.527856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.528000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.528025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.528204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.528231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.528426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.528455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.528651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.528680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.528825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.528854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.529000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.529026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.529184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.529211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.529395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.529424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.529614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.529643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.529845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.529874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.530070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.530099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.530273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.530299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.530478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.396 [2024-07-26 12:25:53.530504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.396 qpair failed and we were unable to recover it.
00:25:00.396 [2024-07-26 12:25:53.530635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.397 [2024-07-26 12:25:53.530677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.397 qpair failed and we were unable to recover it.
00:25:00.397 [2024-07-26 12:25:53.530853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.397 [2024-07-26 12:25:53.530882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.397 qpair failed and we were unable to recover it.
00:25:00.397 [2024-07-26 12:25:53.531025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.397 [2024-07-26 12:25:53.531051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.397 qpair failed and we were unable to recover it.
00:25:00.397 [2024-07-26 12:25:53.531214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.397 [2024-07-26 12:25:53.531240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.397 qpair failed and we were unable to recover it.
00:25:00.397 [2024-07-26 12:25:53.531412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.397 [2024-07-26 12:25:53.531440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.397 qpair failed and we were unable to recover it.
00:25:00.397 [2024-07-26 12:25:53.531724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.531778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.531955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.531984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.532165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.532196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.532354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.532380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.532525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.532553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 
00:25:00.397 [2024-07-26 12:25:53.532700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.532729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.532924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.532953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.533159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.533185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.533362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.533391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.533593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.533619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 
00:25:00.397 [2024-07-26 12:25:53.533763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.533791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.533928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.533956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.534108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.534134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.534289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.534316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.534508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.534533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 
00:25:00.397 [2024-07-26 12:25:53.534717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.534745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.534943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.534972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.535159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.535185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.535337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.535363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.535535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.535563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 
00:25:00.397 [2024-07-26 12:25:53.535755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.535784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.535923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.535951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.536137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.536163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.536284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.536309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.536492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.536517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 
00:25:00.397 [2024-07-26 12:25:53.536694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.536723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.536921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.536950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.537091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.537118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.537249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.537275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.537423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.537456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 
00:25:00.397 [2024-07-26 12:25:53.537699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.537753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.537922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.537951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.538150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.538177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.538327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.538353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.538508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.538536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 
00:25:00.397 [2024-07-26 12:25:53.538753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.538781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.538947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.538976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.539155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.539182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.539353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.539390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.539720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.539778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 
00:25:00.397 [2024-07-26 12:25:53.539980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.540009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.540166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.540192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.540327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.540353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.540483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.540509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.540636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.540662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 
00:25:00.397 [2024-07-26 12:25:53.540848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.540873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.541054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.541099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.541259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.541288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.541465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.541490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.541656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.541685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 
00:25:00.397 [2024-07-26 12:25:53.541848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.541877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.542064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.542090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.542237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.542262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.542422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.542450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.542622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.542648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 
00:25:00.397 [2024-07-26 12:25:53.542799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.542825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.542944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.542970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.543129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.543156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.543327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.543356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.543504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.543532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 
00:25:00.397 [2024-07-26 12:25:53.543708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.543734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.397 [2024-07-26 12:25:53.543936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.397 [2024-07-26 12:25:53.543964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.397 qpair failed and we were unable to recover it. 00:25:00.398 [2024-07-26 12:25:53.544110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.398 [2024-07-26 12:25:53.544138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.398 qpair failed and we were unable to recover it. 00:25:00.398 [2024-07-26 12:25:53.544305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.398 [2024-07-26 12:25:53.544331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.398 qpair failed and we were unable to recover it. 00:25:00.398 [2024-07-26 12:25:53.544530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.398 [2024-07-26 12:25:53.544558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.398 qpair failed and we were unable to recover it. 
00:25:00.398 [2024-07-26 12:25:53.544690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.398 [2024-07-26 12:25:53.544719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.398 qpair failed and we were unable to recover it. 00:25:00.398 [2024-07-26 12:25:53.544920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.398 [2024-07-26 12:25:53.544945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.398 qpair failed and we were unable to recover it. 00:25:00.398 [2024-07-26 12:25:53.545117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.398 [2024-07-26 12:25:53.545145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.398 qpair failed and we were unable to recover it. 00:25:00.398 [2024-07-26 12:25:53.545340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.398 [2024-07-26 12:25:53.545368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.398 qpair failed and we were unable to recover it. 00:25:00.398 [2024-07-26 12:25:53.545524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.398 [2024-07-26 12:25:53.545550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.398 qpair failed and we were unable to recover it. 
00:25:00.398 [2024-07-26 12:25:53.545729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.398 [2024-07-26 12:25:53.545772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.398 qpair failed and we were unable to recover it. 00:25:00.398 [2024-07-26 12:25:53.545908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.398 [2024-07-26 12:25:53.545936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.398 qpair failed and we were unable to recover it. 00:25:00.398 [2024-07-26 12:25:53.546133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.398 [2024-07-26 12:25:53.546159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.398 qpair failed and we were unable to recover it. 00:25:00.398 [2024-07-26 12:25:53.546325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.398 [2024-07-26 12:25:53.546353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.398 qpair failed and we were unable to recover it. 00:25:00.398 [2024-07-26 12:25:53.546521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.398 [2024-07-26 12:25:53.546549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.398 qpair failed and we were unable to recover it. 
00:25:00.398 [2024-07-26 12:25:53.546723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.398 [2024-07-26 12:25:53.546750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.398 qpair failed and we were unable to recover it.
00:25:00.399 [2024-07-26 12:25:53.569050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.569084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.569238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.569264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.569443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.569469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.569662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.569691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.569829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.569858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 
00:25:00.399 [2024-07-26 12:25:53.570004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.570029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.570186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.570212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.570406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.570434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.570569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.570595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.570752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.570794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 
00:25:00.399 [2024-07-26 12:25:53.570975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.571000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.571118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.571153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.571289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.571315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.571445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.571470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.571647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.571673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 
00:25:00.399 [2024-07-26 12:25:53.571841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.571870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.572042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.572076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.572247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.572273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.572397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.572440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.572578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.572607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 
00:25:00.399 [2024-07-26 12:25:53.572751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.399 [2024-07-26 12:25:53.572776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.399 qpair failed and we were unable to recover it. 00:25:00.399 [2024-07-26 12:25:53.572917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.572942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.573110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.573139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.573307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.573333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.573459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.573501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 
00:25:00.400 [2024-07-26 12:25:53.573671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.573699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.573877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.573902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.574068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.574097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.574261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.574289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.574461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.574491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 
00:25:00.400 [2024-07-26 12:25:53.574662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.574690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.574859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.574888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.575085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.575128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.575277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.575302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.575473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.575502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 
00:25:00.400 [2024-07-26 12:25:53.575678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.575703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.575859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.575902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.576057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.576103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.576276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.576303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.576430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.576456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 
00:25:00.400 [2024-07-26 12:25:53.576635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.576664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.576837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.576862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.577057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.577091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.577292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.577318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.577473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.577499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 
00:25:00.400 [2024-07-26 12:25:53.577648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.577687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.577843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.577868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.578022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.578048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.578220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.578248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.578430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.578455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 
00:25:00.400 [2024-07-26 12:25:53.578611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.578637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.578811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.578839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.578977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.579007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.579173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.579199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.579355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.579380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 
00:25:00.400 [2024-07-26 12:25:53.579557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.579586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.579788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.579817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.579968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.579996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.580175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.580201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.580322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.580347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 
00:25:00.400 [2024-07-26 12:25:53.580496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.580537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.580708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.580736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.580946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.580974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.581145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.581171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.581291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.581317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 
00:25:00.400 [2024-07-26 12:25:53.581466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.581491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.581695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.581723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.581894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.581923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.582079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.582105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.582259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.582285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 
00:25:00.400 [2024-07-26 12:25:53.582449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.582477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.582646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.582671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.582793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.582819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.582973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.582999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 00:25:00.400 [2024-07-26 12:25:53.583152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.400 [2024-07-26 12:25:53.583178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.400 qpair failed and we were unable to recover it. 
00:25:00.400 [2024-07-26 12:25:53.583310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.400 [2024-07-26 12:25:53.583335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.400 qpair failed and we were unable to recover it.
[... the three-line sequence above (connect() to 10.0.0.2:4420 failing with errno = 111, ECONNREFUSED, followed by the nvme_tcp_qpair_connect_sock error for tqpair=0x21bf250 and the unrecoverable-qpair message) repeats a further ~114 times between 12:25:53.583511 and 12:25:53.605680; repeated entries elided ...]
00:25:00.402 [2024-07-26 12:25:53.605806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.605834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.606010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.606036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.606158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.606202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.606367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.606395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.606546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.606573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 
00:25:00.402 [2024-07-26 12:25:53.606770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.606798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.607009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.607034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.607217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.607243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.607412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.607440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.607610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.607636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 
00:25:00.402 [2024-07-26 12:25:53.607814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.607840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.607978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.608006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.608160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.608186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.608344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.608370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.608572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.608601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 
00:25:00.402 [2024-07-26 12:25:53.608766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.608794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.608978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.609004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.609163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.609189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.609364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.609393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.609573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.609598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 
00:25:00.402 [2024-07-26 12:25:53.609727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.609753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.609931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.609960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.610117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.610143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.610284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.610310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.610462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.610488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 
00:25:00.402 [2024-07-26 12:25:53.610631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.610656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.610854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.610883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.611046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.611090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.611262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.611288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.611460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.611486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 
00:25:00.402 [2024-07-26 12:25:53.611666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.611692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.611850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.611893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.612071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.612115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.612248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.612277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.612433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.612459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 
00:25:00.402 [2024-07-26 12:25:53.612611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.612636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.612782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.612808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.612956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.612982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.613134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.613160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.613296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.613332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 
00:25:00.402 [2024-07-26 12:25:53.613519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.613545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.613715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.613754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.613897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.613925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.614107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.614133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.614331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.614359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 
00:25:00.402 [2024-07-26 12:25:53.614523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.614551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.402 [2024-07-26 12:25:53.614705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.402 [2024-07-26 12:25:53.614742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.402 qpair failed and we were unable to recover it. 00:25:00.685 [2024-07-26 12:25:53.614875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.685 [2024-07-26 12:25:53.614903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.685 qpair failed and we were unable to recover it. 00:25:00.685 [2024-07-26 12:25:53.615063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.685 [2024-07-26 12:25:53.615107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.685 qpair failed and we were unable to recover it. 00:25:00.685 [2024-07-26 12:25:53.615257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.685 [2024-07-26 12:25:53.615283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.685 qpair failed and we were unable to recover it. 
00:25:00.685 [2024-07-26 12:25:53.615398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.685 [2024-07-26 12:25:53.615424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.685 qpair failed and we were unable to recover it. 00:25:00.685 [2024-07-26 12:25:53.615599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.685 [2024-07-26 12:25:53.615624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.685 qpair failed and we were unable to recover it. 00:25:00.685 [2024-07-26 12:25:53.615777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.685 [2024-07-26 12:25:53.615803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.685 qpair failed and we were unable to recover it. 00:25:00.685 [2024-07-26 12:25:53.615959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.685 [2024-07-26 12:25:53.615985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.685 qpair failed and we were unable to recover it. 00:25:00.685 [2024-07-26 12:25:53.616187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.685 [2024-07-26 12:25:53.616215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.685 qpair failed and we were unable to recover it. 
00:25:00.685 [2024-07-26 12:25:53.616385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.685 [2024-07-26 12:25:53.616411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.685 qpair failed and we were unable to recover it. 00:25:00.685 [2024-07-26 12:25:53.616540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.685 [2024-07-26 12:25:53.616588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.685 qpair failed and we were unable to recover it. 00:25:00.685 [2024-07-26 12:25:53.616783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.616811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.616982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.617008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.617140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.617166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 
00:25:00.686 [2024-07-26 12:25:53.617342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.617368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.617547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.617572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.617695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.617737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.617872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.617900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.618048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.618078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 
00:25:00.686 [2024-07-26 12:25:53.618203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.618243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.618402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.618430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.618577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.618603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.618754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.618797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.618991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.619019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 
00:25:00.686 [2024-07-26 12:25:53.619201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.619227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.619401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.619429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.619589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.619615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.619770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.619796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 00:25:00.686 [2024-07-26 12:25:53.619968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.686 [2024-07-26 12:25:53.619996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.686 qpair failed and we were unable to recover it. 
00:25:00.686 [2024-07-26 12:25:53.620176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.686 [2024-07-26 12:25:53.620202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.686 qpair failed and we were unable to recover it.
[... same connect() failure (errno = 111) and unrecoverable qpair error for tqpair=0x21bf250 (addr=10.0.0.2, port=4420) repeated for every retry through 2024-07-26 12:25:53.642 ...]
00:25:00.689 [2024-07-26 12:25:53.642477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.689 [2024-07-26 12:25:53.642503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.689 qpair failed and we were unable to recover it. 00:25:00.689 [2024-07-26 12:25:53.642671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.689 [2024-07-26 12:25:53.642699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.689 qpair failed and we were unable to recover it. 00:25:00.689 [2024-07-26 12:25:53.642898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.689 [2024-07-26 12:25:53.642926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.689 qpair failed and we were unable to recover it. 00:25:00.689 [2024-07-26 12:25:53.643099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.689 [2024-07-26 12:25:53.643125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.689 qpair failed and we were unable to recover it. 00:25:00.689 [2024-07-26 12:25:53.643324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.689 [2024-07-26 12:25:53.643352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.689 qpair failed and we were unable to recover it. 
00:25:00.689 [2024-07-26 12:25:53.643548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.689 [2024-07-26 12:25:53.643576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.689 qpair failed and we were unable to recover it. 00:25:00.689 [2024-07-26 12:25:53.643756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.689 [2024-07-26 12:25:53.643783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.689 qpair failed and we were unable to recover it. 00:25:00.689 [2024-07-26 12:25:53.643942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.689 [2024-07-26 12:25:53.643968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.689 qpair failed and we were unable to recover it. 00:25:00.689 [2024-07-26 12:25:53.644122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.644148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.644305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.644330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 
00:25:00.690 [2024-07-26 12:25:53.644498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.644527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.644685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.644714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.644864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.644891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.645050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.645091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.645252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.645278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 
00:25:00.690 [2024-07-26 12:25:53.645404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.645434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.645613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.645657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.645866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.645891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.646069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.646095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.646245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.646274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 
00:25:00.690 [2024-07-26 12:25:53.646468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.646493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.646670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.646695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.646891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.646920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.647076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.647102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.647252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.647277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 
00:25:00.690 [2024-07-26 12:25:53.647449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.647478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.647665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.647693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.647835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.647861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.648010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.648052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.648211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.648237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 
00:25:00.690 [2024-07-26 12:25:53.648371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.648396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.648592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.648620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.648801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.648826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.648953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.648978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.649161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.649187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 
00:25:00.690 [2024-07-26 12:25:53.649381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.649409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.649552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.649579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.690 [2024-07-26 12:25:53.649727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.690 [2024-07-26 12:25:53.649769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.690 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.649934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.649962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.650165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.650191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 
00:25:00.691 [2024-07-26 12:25:53.650355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.650383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.650517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.650545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.650724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.650749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.650883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.650909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.651084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.651113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 
00:25:00.691 [2024-07-26 12:25:53.651285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.651310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.651457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.651483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.651660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.651689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.651854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.651880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.652040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.652072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 
00:25:00.691 [2024-07-26 12:25:53.652246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.652274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.652415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.652440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.652617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.652658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.652860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.652885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.653084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.653110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 
00:25:00.691 [2024-07-26 12:25:53.653312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.653341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.653535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.653564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.653739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.653766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.653938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.653966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.654139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.654168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 
00:25:00.691 [2024-07-26 12:25:53.654318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.654344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.654500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.654525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.654679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.654705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.654858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.654883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.655005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.655030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 
00:25:00.691 [2024-07-26 12:25:53.655167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.655193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.655348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.655374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.655508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.655535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.655713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.655741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.655916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.655942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 
00:25:00.691 [2024-07-26 12:25:53.656073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.656100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.656251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.656277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.656420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.656446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.691 qpair failed and we were unable to recover it. 00:25:00.691 [2024-07-26 12:25:53.656604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.691 [2024-07-26 12:25:53.656646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.692 qpair failed and we were unable to recover it. 00:25:00.692 [2024-07-26 12:25:53.656856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.692 [2024-07-26 12:25:53.656882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.692 qpair failed and we were unable to recover it. 
00:25:00.692 [2024-07-26 12:25:53.656998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.692 [2024-07-26 12:25:53.657023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.692 qpair failed and we were unable to recover it. 
00:25:00.692-00:25:00.695 [2024-07-26 12:25:53.657181 through 12:25:53.678313] (above message group repeated ~114 more times: connect() failed, errno = 111; sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.)
00:25:00.695 [2024-07-26 12:25:53.678478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.678506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.678674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.678704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.678876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.678901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.679079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.679107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.679265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.679293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 
00:25:00.695 [2024-07-26 12:25:53.679483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.679509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.679680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.679709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.679928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.679957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.680120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.680146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.680319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.680362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 
00:25:00.695 [2024-07-26 12:25:53.680556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.680585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.680740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.680766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.680901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.680927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.681075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.681105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.681309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.681334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 
00:25:00.695 [2024-07-26 12:25:53.681510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.681544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.681710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.681738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.681891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.681916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.682068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.682094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.682249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.682277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 
00:25:00.695 [2024-07-26 12:25:53.682421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.682447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.695 [2024-07-26 12:25:53.682597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.695 [2024-07-26 12:25:53.682622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.695 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.682792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.682820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.682996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.683021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.683157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.683183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 
00:25:00.696 [2024-07-26 12:25:53.683306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.683332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.683496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.683521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.683696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.683724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.683899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.683927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.684098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.684124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 
00:25:00.696 [2024-07-26 12:25:53.684256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.684281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.684474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.684502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.684697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.684723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.684896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.684924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.685095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.685124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 
00:25:00.696 [2024-07-26 12:25:53.685300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.685325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.685454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.685479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.685658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.685686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.685836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.685862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.685993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.686018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 
00:25:00.696 [2024-07-26 12:25:53.686185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.686211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.686341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.686366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.686537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.686570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.686740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.686768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.686922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.686947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 
00:25:00.696 [2024-07-26 12:25:53.687074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.687100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.687242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.687270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.687439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.687465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.687632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.687660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.687851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.687879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 
00:25:00.696 [2024-07-26 12:25:53.688095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.688137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.688267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.688293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.688442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.688483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.688633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.688659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.688791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.688817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 
00:25:00.696 [2024-07-26 12:25:53.688941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.688967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.689145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.696 [2024-07-26 12:25:53.689171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.696 qpair failed and we were unable to recover it. 00:25:00.696 [2024-07-26 12:25:53.689310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.689338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.689512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.689538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.689660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.689685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 
00:25:00.697 [2024-07-26 12:25:53.689862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.689905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.690068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.690097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.690250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.690275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.690427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.690453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.690641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.690670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 
00:25:00.697 [2024-07-26 12:25:53.690842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.690867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.691048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.691097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.691329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.691369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.691573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.691601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.691778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.691807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 
00:25:00.697 [2024-07-26 12:25:53.691973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.692002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.692191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.692229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.692396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.692437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.692654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.692690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.692901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.692938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 
00:25:00.697 [2024-07-26 12:25:53.693136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.693168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.693304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.693333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.693486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.693512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.693707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.693748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.693928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.693964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 
00:25:00.697 [2024-07-26 12:25:53.694145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.694182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.694376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.694417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.694598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.694639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.694867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.694908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.695151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.695188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 
00:25:00.697 [2024-07-26 12:25:53.695347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.695383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.695565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.695601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.695801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.695840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.696030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.696080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.696293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.696329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 
00:25:00.697 [2024-07-26 12:25:53.696497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.696527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.696693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.696722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.696896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.697 [2024-07-26 12:25:53.696921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.697 qpair failed and we were unable to recover it. 00:25:00.697 [2024-07-26 12:25:53.697082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.697119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.697265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.697302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 
00:25:00.698 [2024-07-26 12:25:53.697519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.697555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.697715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.697751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.697955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.697987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.698179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.698206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.698404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.698433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 
00:25:00.698 [2024-07-26 12:25:53.698611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.698640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.698818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.698844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.698979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.699004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.699147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.699176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.699352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.699388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 
00:25:00.698 [2024-07-26 12:25:53.699620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.699660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.699868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.699903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.700125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.700154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.700355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.700384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.700532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.700560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 
00:25:00.698 [2024-07-26 12:25:53.700735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.700765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.700998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.701038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.701255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.701295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.701495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.701531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.701684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.701721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 
00:25:00.698 [2024-07-26 12:25:53.701951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.701992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.702192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.702229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.702407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.702446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.702637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.702676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.702883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.702910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 
00:25:00.698 [2024-07-26 12:25:53.703119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.703156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.703286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.703313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.703520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.703545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.703745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.703786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.703964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.704003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 
00:25:00.698 [2024-07-26 12:25:53.704206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.704243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.704442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.704481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.704647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.704682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.698 [2024-07-26 12:25:53.704856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.698 [2024-07-26 12:25:53.704882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.698 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.705019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.705045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 
00:25:00.699 [2024-07-26 12:25:53.705227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.705255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.705419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.705455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.705634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.705669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.705876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.705916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.706095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.706132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 
00:25:00.699 [2024-07-26 12:25:53.706305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.706332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.706502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.706528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.706685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.706718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.706922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.706963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.707145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.707185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 
00:25:00.699 [2024-07-26 12:25:53.707360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.707395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.707617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.707659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.707873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.707902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.708103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.708130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.708297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.708337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 
00:25:00.699 [2024-07-26 12:25:53.708513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.708549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.708725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.708761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.708933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.708968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.709166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.709207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.709379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.709406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 
00:25:00.699 [2024-07-26 12:25:53.709534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.709576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.709745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.709775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.709991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.710031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.710230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.710266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.710433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.710472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 
00:25:00.699 [2024-07-26 12:25:53.710701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.710731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.710865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.710891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.711089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.711119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.711293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.711327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.711492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.711532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 
00:25:00.699 [2024-07-26 12:25:53.711693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.711732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.711931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.711967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.712132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.712174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.712343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.712374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 00:25:00.699 [2024-07-26 12:25:53.712555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.699 [2024-07-26 12:25:53.712581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.699 qpair failed and we were unable to recover it. 
00:25:00.700 [2024-07-26 12:25:53.712713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.712739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 00:25:00.700 [2024-07-26 12:25:53.712910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.712939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 00:25:00.700 [2024-07-26 12:25:53.713103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.713130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 00:25:00.700 [2024-07-26 12:25:53.713262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.713288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 00:25:00.700 [2024-07-26 12:25:53.713414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.713439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 
00:25:00.700 [2024-07-26 12:25:53.713566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.713602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 00:25:00.700 [2024-07-26 12:25:53.713745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.713781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 00:25:00.700 [2024-07-26 12:25:53.713967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.714006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 00:25:00.700 [2024-07-26 12:25:53.714241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.714278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 00:25:00.700 [2024-07-26 12:25:53.714480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.714511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 
00:25:00.700 [2024-07-26 12:25:53.714706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.714735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 00:25:00.700 [2024-07-26 12:25:53.714877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.714903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 00:25:00.700 [2024-07-26 12:25:53.715084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.715129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 00:25:00.700 [2024-07-26 12:25:53.715336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.715375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 00:25:00.700 [2024-07-26 12:25:53.715507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.700 [2024-07-26 12:25:53.715534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.700 qpair failed and we were unable to recover it. 
00:25:00.700 [2024-07-26 12:25:53.715681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.715725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.715918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.715947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.716128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.716155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.716291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.716318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.716456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.716487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.716647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.716673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.716824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.716857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.717038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.717070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.717257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.717283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.717414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.717441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.717598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.717630] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.717782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.717812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.718015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.718054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.718211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.718237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.718369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.700 [2024-07-26 12:25:53.718402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.700 qpair failed and we were unable to recover it.
00:25:00.700 [2024-07-26 12:25:53.718576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.718603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.718800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.718830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.718989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.719014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.719181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.719215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.719363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.719390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.719587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.719614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.719754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.719781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.719957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.719986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.720150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.720178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.720373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.720401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.720562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.720591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.720733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.720759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.720937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.720967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.721168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.721195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.721341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.721367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.721526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.721553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.721707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.721733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.721873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.721899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.722032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.722074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.722235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.722261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.722386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.722422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.722579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.722606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.722786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.722815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.722997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.723024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.723185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.723211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.723358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.723387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.723563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.723589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.723771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.723801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.724019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.724048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.724261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.724288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.724419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.724445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.724602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.724643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.724808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.724835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.725005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.725042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.725199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.701 [2024-07-26 12:25:53.725225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.701 qpair failed and we were unable to recover it.
00:25:00.701 [2024-07-26 12:25:53.725360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.725395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.725556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.725587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.725761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.725798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.725946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.725972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.726126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.726154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.726307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.726360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.726526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.726552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.726685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.726727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.726899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.726928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.727121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.727148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.727284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.727309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.727459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.727490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.727670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.727697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.727848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.727877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.728049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.728085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.728246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.728271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.728441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.728471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.728668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.728708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.728860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.728886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.729021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.729085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.729240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.729272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.729431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.729459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.729613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.729656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.729828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.729857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.730006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.730033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.730195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.730222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.730348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.730375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.730557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.730587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.730765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.730796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.730955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.730984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.731188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.731215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.731393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.731427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.731598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.731624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.731775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.731812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.731940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.702 [2024-07-26 12:25:53.731966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.702 qpair failed and we were unable to recover it.
00:25:00.702 [2024-07-26 12:25:53.732148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.703 [2024-07-26 12:25:53.732174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.703 qpair failed and we were unable to recover it.
00:25:00.703 [2024-07-26 12:25:53.732361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.703 [2024-07-26 12:25:53.732396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.703 qpair failed and we were unable to recover it.
00:25:00.703 [2024-07-26 12:25:53.732569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.703 [2024-07-26 12:25:53.732609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.703 qpair failed and we were unable to recover it.
00:25:00.703 [2024-07-26 12:25:53.732861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.703 [2024-07-26 12:25:53.732900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.703 qpair failed and we were unable to recover it.
00:25:00.703 [2024-07-26 12:25:53.733040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.703 [2024-07-26 12:25:53.733077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.703 qpair failed and we were unable to recover it.
00:25:00.703 [2024-07-26 12:25:53.733199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.703 [2024-07-26 12:25:53.733235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.703 qpair failed and we were unable to recover it.
00:25:00.703 [2024-07-26 12:25:53.733386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.703 [2024-07-26 12:25:53.733440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.703 qpair failed and we were unable to recover it.
00:25:00.703 [2024-07-26 12:25:53.733622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.703 [2024-07-26 12:25:53.733659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:00.703 qpair failed and we were unable to recover it.
00:25:00.703 [2024-07-26 12:25:53.733949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.703 [2024-07-26 12:25:53.734021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.703 qpair failed and we were unable to recover it.
00:25:00.703 [2024-07-26 12:25:53.734216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.703 [2024-07-26 12:25:53.734245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.703 qpair failed and we were unable to recover it.
00:25:00.703 [2024-07-26 12:25:53.734407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.734435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.734632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.734662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.734902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.734929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.735084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.735111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.735238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.735265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 
00:25:00.703 [2024-07-26 12:25:53.735428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.735456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.735616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.735642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.735796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.735822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.735976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.736002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.736200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.736229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 
00:25:00.703 [2024-07-26 12:25:53.736368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.736394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.736577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.736604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.736791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.736819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.737064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.737109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.737238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.737266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 
00:25:00.703 [2024-07-26 12:25:53.737430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.737457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.737586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.737629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.737801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.737829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.737983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.738009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.738244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.738271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 
00:25:00.703 [2024-07-26 12:25:53.738457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.738511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.738706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.738733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.738893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.738919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.739055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.739109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 00:25:00.703 [2024-07-26 12:25:53.739288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.739315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.703 qpair failed and we were unable to recover it. 
00:25:00.703 [2024-07-26 12:25:53.739460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.703 [2024-07-26 12:25:53.739489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.739652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.739682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.739827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.739856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.740010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.740037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.740173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.740202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 
00:25:00.704 [2024-07-26 12:25:53.740401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.740427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.740557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.740583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.740727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.740754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.740926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.740953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.741132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.741158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 
00:25:00.704 [2024-07-26 12:25:53.741301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.741328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.741522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.741548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.741748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.741801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.741939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.741968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.742155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.742181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 
00:25:00.704 [2024-07-26 12:25:53.742307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.742333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.742514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.742543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.742724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.742753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.742951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.742980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.743169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.743198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 
00:25:00.704 [2024-07-26 12:25:53.743358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.743384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.743634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.743663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.743840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.743869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.744011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.744038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.744223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.744250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 
00:25:00.704 [2024-07-26 12:25:53.744575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.744626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.744804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.744830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.745066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.745093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.745252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.745278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.745432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.745459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 
00:25:00.704 [2024-07-26 12:25:53.745658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.745688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.746009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.746068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.746246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.746272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.746447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.746477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.704 [2024-07-26 12:25:53.746651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.746677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 
00:25:00.704 [2024-07-26 12:25:53.746865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.704 [2024-07-26 12:25:53.746891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.704 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.747045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.747078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.747228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.747254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.747437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.747467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.747615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.747644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 
00:25:00.705 [2024-07-26 12:25:53.747832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.747861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.748029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.748070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.748243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.748269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.748424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.748468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.748673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.748699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 
00:25:00.705 [2024-07-26 12:25:53.748876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.748904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.749122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.749149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.749305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.749332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.749531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.749560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.749778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.749833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 
00:25:00.705 [2024-07-26 12:25:53.749987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.750013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.750164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.750190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.750351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.750377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.750533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.750560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.750754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.750782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 
00:25:00.705 [2024-07-26 12:25:53.750977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.751006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.751187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.751214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.751334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.751376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.751579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.751605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 00:25:00.705 [2024-07-26 12:25:53.751757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.705 [2024-07-26 12:25:53.751784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.705 qpair failed and we were unable to recover it. 
00:25:00.705 [2024-07-26 12:25:53.751950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.705 [2024-07-26 12:25:53.751978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.705 qpair failed and we were unable to recover it.
[The three log lines above repeat, with only the timestamps changing, for every reconnection attempt through 12:25:53.774682: each connect() to 10.0.0.2:4420 fails with errno = 111 (ECONNREFUSED) and the qpair cannot be recovered.]
00:25:00.709 [2024-07-26 12:25:53.774870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.774898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.775076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.775102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.775272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.775300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.775470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.775498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.775696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.775721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 
00:25:00.709 [2024-07-26 12:25:53.775884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.775910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.776110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.776140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.776292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.776319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.776474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.776500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.776698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.776727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 
00:25:00.709 [2024-07-26 12:25:53.776928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.776956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.777095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.777138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.777273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.777298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.777456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.777482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.777637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.777663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 
00:25:00.709 [2024-07-26 12:25:53.777862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.777888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.778010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.778036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.778198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.778225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.778423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.778452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.778596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.778621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 
00:25:00.709 [2024-07-26 12:25:53.778773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.778799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.778947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.778977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.779150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.779185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.779320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.779369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.779567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.779594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 
00:25:00.709 [2024-07-26 12:25:53.779773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.779799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.779962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.779990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.780130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.780159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.780336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.780362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.780540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.780569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 
00:25:00.709 [2024-07-26 12:25:53.780733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.780761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.780957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.780982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.781201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.709 [2024-07-26 12:25:53.781230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.709 qpair failed and we were unable to recover it. 00:25:00.709 [2024-07-26 12:25:53.781367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.781398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.781592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.781619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 
00:25:00.710 [2024-07-26 12:25:53.781790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.781819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.781992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.782020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.782183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.782210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.782410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.782438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.782605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.782633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 
00:25:00.710 [2024-07-26 12:25:53.782834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.782859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.783000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.783029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.783210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.783239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.783439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.783465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.783624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.783650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 
00:25:00.710 [2024-07-26 12:25:53.783820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.783848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.784080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.784124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.784302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.784347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.784546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.784572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.784727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.784753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 
00:25:00.710 [2024-07-26 12:25:53.784927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.784955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.785149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.785177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.785326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.785352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.785510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.785555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.785745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.785774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 
00:25:00.710 [2024-07-26 12:25:53.785946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.785972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.786174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.786203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.786364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.786393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.786540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.786566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.786719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.786761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 
00:25:00.710 [2024-07-26 12:25:53.786941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.786966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.787146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.787172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.787302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.710 [2024-07-26 12:25:53.787328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.710 qpair failed and we were unable to recover it. 00:25:00.710 [2024-07-26 12:25:53.787508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.711 [2024-07-26 12:25:53.787537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.711 qpair failed and we were unable to recover it. 00:25:00.711 [2024-07-26 12:25:53.787692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.711 [2024-07-26 12:25:53.787718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.711 qpair failed and we were unable to recover it. 
00:25:00.711 [2024-07-26 12:25:53.787890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.711 [2024-07-26 12:25:53.787920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.711 qpair failed and we were unable to recover it. 00:25:00.711 [2024-07-26 12:25:53.788078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.711 [2024-07-26 12:25:53.788105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.711 qpair failed and we were unable to recover it. 00:25:00.711 [2024-07-26 12:25:53.788262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.711 [2024-07-26 12:25:53.788288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.711 qpair failed and we were unable to recover it. 00:25:00.711 [2024-07-26 12:25:53.788461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.711 [2024-07-26 12:25:53.788498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.711 qpair failed and we were unable to recover it. 00:25:00.711 [2024-07-26 12:25:53.788693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.711 [2024-07-26 12:25:53.788721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.711 qpair failed and we were unable to recover it. 
00:25:00.711 [2024-07-26 12:25:53.788866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.711 [2024-07-26 12:25:53.788893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.711 qpair failed and we were unable to recover it. 00:25:00.711 [2024-07-26 12:25:53.789092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.711 [2024-07-26 12:25:53.789121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.711 qpair failed and we were unable to recover it. 00:25:00.711 [2024-07-26 12:25:53.789285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.711 [2024-07-26 12:25:53.789314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.711 qpair failed and we were unable to recover it. 00:25:00.711 [2024-07-26 12:25:53.789514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.711 [2024-07-26 12:25:53.789540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.711 qpair failed and we were unable to recover it. 00:25:00.711 [2024-07-26 12:25:53.789756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.711 [2024-07-26 12:25:53.789784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.711 qpair failed and we were unable to recover it. 
00:25:00.711 [2024-07-26 12:25:53.789955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.711 [2024-07-26 12:25:53.789983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.711 qpair failed and we were unable to recover it.
00:25:00.714 [2024-07-26 12:25:53.812668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.812695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 00:25:00.714 [2024-07-26 12:25:53.812863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.812892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 00:25:00.714 [2024-07-26 12:25:53.813064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.813094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 00:25:00.714 [2024-07-26 12:25:53.813276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.813303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 00:25:00.714 [2024-07-26 12:25:53.813501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.813530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 
00:25:00.714 [2024-07-26 12:25:53.813738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.813764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 00:25:00.714 [2024-07-26 12:25:53.813915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.813941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 00:25:00.714 [2024-07-26 12:25:53.814127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.814158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 00:25:00.714 [2024-07-26 12:25:53.814358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.814384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 00:25:00.714 [2024-07-26 12:25:53.814569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.814595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 
00:25:00.714 [2024-07-26 12:25:53.814777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.814806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 00:25:00.714 [2024-07-26 12:25:53.815002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.815031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 00:25:00.714 [2024-07-26 12:25:53.815190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.815217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 00:25:00.714 [2024-07-26 12:25:53.815372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.815398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 00:25:00.714 [2024-07-26 12:25:53.815592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.714 [2024-07-26 12:25:53.815621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.714 qpair failed and we were unable to recover it. 
00:25:00.715 [2024-07-26 12:25:53.815821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.815848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.815982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.816010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.816207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.816237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.816414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.816440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.816575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.816602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 
00:25:00.715 [2024-07-26 12:25:53.816757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.816804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.816955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.816982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.817176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.817206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.817374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.817404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.817551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.817577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 
00:25:00.715 [2024-07-26 12:25:53.817724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.817751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.817931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.817957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.818140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.818167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.818344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.818370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.818550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.818576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 
00:25:00.715 [2024-07-26 12:25:53.818758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.818784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.818928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.818957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.819136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.819166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.819314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.819340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.819471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.819498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 
00:25:00.715 [2024-07-26 12:25:53.819695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.819724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.819864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.819891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.820012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.820039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.820244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.820273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.820473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.820499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 
00:25:00.715 [2024-07-26 12:25:53.820697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.820726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.820871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.820901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.821103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.821130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.821309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.821338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.821507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.821533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 
00:25:00.715 [2024-07-26 12:25:53.821712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.715 [2024-07-26 12:25:53.821738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.715 qpair failed and we were unable to recover it. 00:25:00.715 [2024-07-26 12:25:53.821942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.821971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.822145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.822175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.822330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.822357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.822506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.822532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 
00:25:00.716 [2024-07-26 12:25:53.822709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.822737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.822900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.822929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.823119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.823145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.823273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.823299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.823455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.823481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 
00:25:00.716 [2024-07-26 12:25:53.823632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.823658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.823843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.823870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.823998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.824024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.824186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.824213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.824411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.824439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 
00:25:00.716 [2024-07-26 12:25:53.824602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.824633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.824830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.824858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.825023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.825052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.825220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.825246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.825405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.825431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 
00:25:00.716 [2024-07-26 12:25:53.825595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.825624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.825793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.825819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.826020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.826048] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.826235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.826264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.826439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.826465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 
00:25:00.716 [2024-07-26 12:25:53.826593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.826618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.826770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.826796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.826923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.826950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.827076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.827103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.827268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.827311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 
00:25:00.716 [2024-07-26 12:25:53.827447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.827473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.827602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.827628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.827839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.827868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.828015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.828041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.828223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.828253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 
00:25:00.716 [2024-07-26 12:25:53.828429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.828455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.828606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.716 [2024-07-26 12:25:53.828632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.716 qpair failed and we were unable to recover it. 00:25:00.716 [2024-07-26 12:25:53.828811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.717 [2024-07-26 12:25:53.828841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.717 qpair failed and we were unable to recover it. 00:25:00.717 [2024-07-26 12:25:53.829002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.717 [2024-07-26 12:25:53.829030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.717 qpair failed and we were unable to recover it. 00:25:00.717 [2024-07-26 12:25:53.829176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.717 [2024-07-26 12:25:53.829203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.717 qpair failed and we were unable to recover it. 
00:25:00.717 [2024-07-26 12:25:53.829363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.829390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.829549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.829576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.829731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.829758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.829957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.829986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.830161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.830188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.830316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.830343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.830494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.830520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.830674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.830718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.830914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.830940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.831108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.831138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.831334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.831363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.831505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.831531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.831676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.831719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.831912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.831940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.832111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.832138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.832334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.832367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.832566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.832595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.832765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.832791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.832923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.832950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.833158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.833188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.833359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.833385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.833555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.833584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.833776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.833805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.833951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.833977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.834126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.834169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.834364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.834393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.834533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.834559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.834707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.834750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.834947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.834975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.835152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.835179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.835377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.835406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.835570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.835599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.717 [2024-07-26 12:25:53.835800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.717 [2024-07-26 12:25:53.835826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.717 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.836000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.836028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.836203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.836232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.836406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.836432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.836580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.836606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.836792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.836836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.836991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.837018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.837173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.837199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.837401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.837430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.837604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.837631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.837786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.837829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.838009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.838053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.838234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.838261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.838430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.838460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.838642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.838669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.838847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.838873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.839004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.839030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.839154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.839180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.839337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.839362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.839495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.839521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.839705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.839730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.839912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.839938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.840056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.840108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.840294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.840325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.840481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.840507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.840667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.840696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.840889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.840917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.841138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.841165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.841336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.841364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.841658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.841720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.841901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.841927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.842101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.842129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.842298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.842328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.842509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.842536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.842714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.842743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.718 [2024-07-26 12:25:53.842875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.718 [2024-07-26 12:25:53.842904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.718 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.843105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.843132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.843278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.843306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.843591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.843648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.843858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.843884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.844083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.844112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.844303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.844347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.844555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.844583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.844785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.844814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.845017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.845046] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.845199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.845226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.845353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.845395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.845562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.845591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.845762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.845788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.845957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.845986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.846146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.846173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.846323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.846350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.846493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.846522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.846727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.846754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.846936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.846963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.847163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.847193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.847355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.847383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.847587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.847613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.847787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.847816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.847980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.848009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.848180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.848207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.848372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.848402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.848654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.719 [2024-07-26 12:25:53.848681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.719 qpair failed and we were unable to recover it.
00:25:00.719 [2024-07-26 12:25:53.848860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.719 [2024-07-26 12:25:53.848890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.719 qpair failed and we were unable to recover it. 00:25:00.719 [2024-07-26 12:25:53.849085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.719 [2024-07-26 12:25:53.849115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.719 qpair failed and we were unable to recover it. 00:25:00.719 [2024-07-26 12:25:53.849286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.719 [2024-07-26 12:25:53.849315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.719 qpair failed and we were unable to recover it. 00:25:00.719 [2024-07-26 12:25:53.849460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.719 [2024-07-26 12:25:53.849486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.719 qpair failed and we were unable to recover it. 00:25:00.719 [2024-07-26 12:25:53.849613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.719 [2024-07-26 12:25:53.849640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.719 qpair failed and we were unable to recover it. 
00:25:00.719 [2024-07-26 12:25:53.849823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.719 [2024-07-26 12:25:53.849852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.719 qpair failed and we were unable to recover it. 00:25:00.719 [2024-07-26 12:25:53.850054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.719 [2024-07-26 12:25:53.850086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.719 qpair failed and we were unable to recover it. 00:25:00.719 [2024-07-26 12:25:53.850260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.719 [2024-07-26 12:25:53.850289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.719 qpair failed and we were unable to recover it. 00:25:00.719 [2024-07-26 12:25:53.850489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.719 [2024-07-26 12:25:53.850516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.719 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.850662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.850688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 
00:25:00.720 [2024-07-26 12:25:53.850890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.850919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.851123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.851152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.851330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.851357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.851515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.851541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.851698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.851724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 
00:25:00.720 [2024-07-26 12:25:53.851878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.851905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.852055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.852131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.852326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.852355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.852521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.852548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.852703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.852729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 
00:25:00.720 [2024-07-26 12:25:53.852886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.852912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.853115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.853142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.853293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.853319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.853495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.853524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.853694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.853721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 
00:25:00.720 [2024-07-26 12:25:53.853889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.853918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.854114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.854146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.854304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.854330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.854483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.854527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.854705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.854731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 
00:25:00.720 [2024-07-26 12:25:53.854884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.854910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.855080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.855109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.855276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.855306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.855497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.855523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.855699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.855727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 
00:25:00.720 [2024-07-26 12:25:53.855911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.855937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.856119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.856145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.856272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.856298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.856434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.856460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 00:25:00.720 [2024-07-26 12:25:53.856611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.720 [2024-07-26 12:25:53.856637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.720 qpair failed and we were unable to recover it. 
00:25:00.721 [2024-07-26 12:25:53.856836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.856869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.857057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.857179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.857343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.857372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.857577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.857606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.857891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.857938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 
00:25:00.721 [2024-07-26 12:25:53.858122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.858149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.858324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.858353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.858662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.858714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.858885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.858912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.859118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.859148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 
00:25:00.721 [2024-07-26 12:25:53.859293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.859323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.859480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.859507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.859659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.859686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.859837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.859883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.860104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.860131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 
00:25:00.721 [2024-07-26 12:25:53.860308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.860335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.860515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.860545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.860700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.860727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.860931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.860960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.861103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.861133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 
00:25:00.721 [2024-07-26 12:25:53.861281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.861307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.861461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.861504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.861777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.861827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.861977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.862004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.862168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.862210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 
00:25:00.721 [2024-07-26 12:25:53.862406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.862435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.862610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.862637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.862782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.862812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.862945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.862974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.863133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.863160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 
00:25:00.721 [2024-07-26 12:25:53.863316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.863343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.863509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.863540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.863719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.863745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.863869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.863895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 00:25:00.721 [2024-07-26 12:25:53.864048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.721 [2024-07-26 12:25:53.864086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.721 qpair failed and we were unable to recover it. 
00:25:00.721 [2024-07-26 12:25:53.864266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.722 [2024-07-26 12:25:53.864292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.722 qpair failed and we were unable to recover it. 00:25:00.722 [2024-07-26 12:25:53.864437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.722 [2024-07-26 12:25:53.864465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.722 qpair failed and we were unable to recover it. 00:25:00.722 [2024-07-26 12:25:53.864764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.722 [2024-07-26 12:25:53.864821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.722 qpair failed and we were unable to recover it. 00:25:00.722 [2024-07-26 12:25:53.864995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.722 [2024-07-26 12:25:53.865022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.722 qpair failed and we were unable to recover it. 00:25:00.722 [2024-07-26 12:25:53.865184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.722 [2024-07-26 12:25:53.865211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.722 qpair failed and we were unable to recover it. 
00:25:00.722 [2024-07-26 12:25:53.865377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.722 [2024-07-26 12:25:53.865410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.722 qpair failed and we were unable to recover it. 00:25:00.722 [2024-07-26 12:25:53.865588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.722 [2024-07-26 12:25:53.865613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.722 qpair failed and we were unable to recover it. 00:25:00.722 [2024-07-26 12:25:53.865784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.722 [2024-07-26 12:25:53.865816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.722 qpair failed and we were unable to recover it. 00:25:00.722 [2024-07-26 12:25:53.866002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.722 [2024-07-26 12:25:53.866028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.722 qpair failed and we were unable to recover it. 00:25:00.722 [2024-07-26 12:25:53.866198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.722 [2024-07-26 12:25:53.866225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.722 qpair failed and we were unable to recover it. 
00:25:00.722 [2024-07-26 12:25:53.866398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.722 [2024-07-26 12:25:53.866427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.722 qpair failed and we were unable to recover it. 
[identical error triplet repeated: connect() failed with errno = 111 (ECONNREFUSED) and qpair recovery failed for tqpair=0x7fb500000b90, addr=10.0.0.2, port=4420, continuously from 12:25:53.866 through 12:25:53.889]
00:25:00.725 [2024-07-26 12:25:53.889452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.725 [2024-07-26 12:25:53.889484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.725 qpair failed and we were unable to recover it. 00:25:00.725 [2024-07-26 12:25:53.889666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.725 [2024-07-26 12:25:53.889693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.725 qpair failed and we were unable to recover it. 00:25:00.725 [2024-07-26 12:25:53.889869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.725 [2024-07-26 12:25:53.889897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.725 qpair failed and we were unable to recover it. 00:25:00.725 [2024-07-26 12:25:53.890074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.725 [2024-07-26 12:25:53.890109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.725 qpair failed and we were unable to recover it. 00:25:00.725 [2024-07-26 12:25:53.890259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.725 [2024-07-26 12:25:53.890284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.725 qpair failed and we were unable to recover it. 
00:25:00.725 [2024-07-26 12:25:53.890461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.725 [2024-07-26 12:25:53.890490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.725 qpair failed and we were unable to recover it. 00:25:00.725 [2024-07-26 12:25:53.890656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.890685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.890848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.890878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.891024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.891054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.891236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.891262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 
00:25:00.726 [2024-07-26 12:25:53.891440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.891466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.891641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.891672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.891845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.891874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.892046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.892079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.892274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.892303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 
00:25:00.726 [2024-07-26 12:25:53.892500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.892528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.892697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.892723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.892892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.892921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.893094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.893124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.893301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.893328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 
00:25:00.726 [2024-07-26 12:25:53.893534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.893563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.893709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.893737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.893909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.893936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.894139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.894168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.894353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.894380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 
00:25:00.726 [2024-07-26 12:25:53.894526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.894553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.894724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.894753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.894926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.894956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.895128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.895155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.726 [2024-07-26 12:25:53.895326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.895357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 
00:25:00.726 [2024-07-26 12:25:53.895555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.726 [2024-07-26 12:25:53.895584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.726 qpair failed and we were unable to recover it.
00:25:00.726 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 2978740 Killed "${NVMF_APP[@]}" "$@"
00:25:00.726 [2024-07-26 12:25:53.895730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.726 [2024-07-26 12:25:53.895756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.726 qpair failed and we were unable to recover it.
00:25:00.726 [2024-07-26 12:25:53.895903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.726 [2024-07-26 12:25:53.895930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.726 qpair failed and we were unable to recover it.
00:25:00.726 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2
00:25:00.726 [2024-07-26 12:25:53.896110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.726 [2024-07-26 12:25:53.896140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.726 qpair failed and we were unable to recover it.
00:25:00.726 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0
00:25:00.726 [2024-07-26 12:25:53.896304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.726 [2024-07-26 12:25:53.896333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.726 qpair failed and we were unable to recover it.
00:25:00.726 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt
00:25:00.726 [2024-07-26 12:25:53.896489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.726 [2024-07-26 12:25:53.896517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.726 qpair failed and we were unable to recover it.
00:25:00.726 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:00.726 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:00.726 [2024-07-26 12:25:53.896671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.726 [2024-07-26 12:25:53.896697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.726 qpair failed and we were unable to recover it.
00:25:00.726 [2024-07-26 12:25:53.896866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.726 [2024-07-26 12:25:53.896895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.726 qpair failed and we were unable to recover it.
00:25:00.726 [2024-07-26 12:25:53.897100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.726 [2024-07-26 12:25:53.897130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.726 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.897284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.897310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.897493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.897537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.897707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.897735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.897907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.897933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 
00:25:00.727 [2024-07-26 12:25:53.898099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.898128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.898291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.898319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.898502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.898528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.898688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.898713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.898867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.898895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 
00:25:00.727 [2024-07-26 12:25:53.899034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.899069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.899251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.899277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.899512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.899568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.899751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.899777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.899972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.900001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 
00:25:00.727 [2024-07-26 12:25:53.900195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.900224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.900394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.900432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.900619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.900652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.900822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.900859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 00:25:00.727 [2024-07-26 12:25:53.901013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:00.727 [2024-07-26 12:25:53.901039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:00.727 qpair failed and we were unable to recover it. 
00:25:00.727 [2024-07-26 12:25:53.901202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.727 [2024-07-26 12:25:53.901228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.727 qpair failed and we were unable to recover it.
00:25:00.727 [2024-07-26 12:25:53.901359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.727 [2024-07-26 12:25:53.901386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.727 qpair failed and we were unable to recover it.
00:25:00.727 [2024-07-26 12:25:53.901544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.727 [2024-07-26 12:25:53.901571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.727 qpair failed and we were unable to recover it.
00:25:00.727 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=2979286
00:25:00.727 [2024-07-26 12:25:53.901740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.727 [2024-07-26 12:25:53.901771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.727 qpair failed and we were unable to recover it.
00:25:00.727 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0
00:25:00.727 [2024-07-26 12:25:53.901950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.727 [2024-07-26 12:25:53.901980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.727 qpair failed and we were unable to recover it.
00:25:00.727 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 2979286
00:25:00.727 [2024-07-26 12:25:53.902121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.727 [2024-07-26 12:25:53.902148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.727 qpair failed and we were unable to recover it.
00:25:00.727 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@831 -- # '[' -z 2979286 ']'
00:25:00.727 [2024-07-26 12:25:53.902294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.727 [2024-07-26 12:25:53.902337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.727 qpair failed and we were unable to recover it.
00:25:00.727 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:00.727 [2024-07-26 12:25:53.902486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.727 [2024-07-26 12:25:53.902516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.727 qpair failed and we were unable to recover it.
00:25:00.727 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:00.727 [2024-07-26 12:25:53.902688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.727 [2024-07-26 12:25:53.902716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:00.727 qpair failed and we were unable to recover it.
00:25:00.727 [2024-07-26 12:25:53.902893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.727 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
[2024-07-26 12:25:53.902925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:25:00.727 [2024-07-26 12:25:53.903114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.727 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # xtrace_disable
[2024-07-26 12:25:53.903160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
qpair failed and we were unable to recover it.
00:25:00.727 12:25:53 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[2024-07-26 12:25:53.903321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.727 [2024-07-26 12:25:53.903358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.727 qpair failed and we were unable to recover it.
00:25:00.727 [2024-07-26 12:25:53.903891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.728 [2024-07-26 12:25:53.903924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.728 qpair failed and we were unable to recover it.
00:25:00.728 [2024-07-26 12:25:53.904130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:00.728 [2024-07-26 12:25:53.904158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420
00:25:00.728 qpair failed and we were unable to recover it.
00:25:00.728 [... the same record triplet -- posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error / "qpair failed and we were unable to recover it." -- repeats for every reconnect attempt from 12:25:53.904317 through 12:25:53.925292: against tqpair=0x7fb4f0000b90, then tqpair=0x7fb500000b90 (12:25:53.912182-12:25:53.913846), then tqpair=0x7fb4f0000b90 again, all with addr=10.0.0.2, port=4420 ...]
00:25:01.018 [2024-07-26 12:25:53.925470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.925501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.925678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.925704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.925903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.925931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.926146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.926174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.926341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.926367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 
00:25:01.018 [2024-07-26 12:25:53.926505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.926550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.926716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.926744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.926910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.926936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.927076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.927102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.927230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.927256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 
00:25:01.018 [2024-07-26 12:25:53.927411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.927437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.927632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.927671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.927830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.927857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.927983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.928019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.928208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.928236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 
00:25:01.018 [2024-07-26 12:25:53.928369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.928394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.928572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.928599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.928733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.928760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.928892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.928919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.929097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.929123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 
00:25:01.018 [2024-07-26 12:25:53.929264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.929290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.929437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.929463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.929596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.929622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.929743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.929770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.018 [2024-07-26 12:25:53.929934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.929960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 
00:25:01.018 [2024-07-26 12:25:53.930095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.018 [2024-07-26 12:25:53.930124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.018 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.930275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.930301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.930480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.930508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.930641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.930668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.930827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.930852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 
00:25:01.019 [2024-07-26 12:25:53.931000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.931026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.931220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.931249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.931405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.931431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.931611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.931636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.931796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.931822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 
00:25:01.019 [2024-07-26 12:25:53.931956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.931983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.932139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.932166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.932325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.932353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.932515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.932541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.932694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.932720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 
00:25:01.019 [2024-07-26 12:25:53.932882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.932908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.933029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.933056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.933215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.933241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.933402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.933428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.933553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.933580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 
00:25:01.019 [2024-07-26 12:25:53.933736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.933762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.933934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.933960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.934103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.934129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.934262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.934289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.934476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.934502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 
00:25:01.019 [2024-07-26 12:25:53.934654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.934679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.934810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.934840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.934998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.935026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.935190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.935216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.935363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.935390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 
00:25:01.019 [2024-07-26 12:25:53.935514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.935540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.935675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.935701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.935857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.935882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.936049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.936081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.019 [2024-07-26 12:25:53.936202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.936228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 
00:25:01.019 [2024-07-26 12:25:53.936378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.019 [2024-07-26 12:25:53.936404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.019 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.936537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.936562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.936723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.936749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.936875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.936901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.937064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.937090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 
00:25:01.020 [2024-07-26 12:25:53.937257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.937283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.937464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.937490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.937657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.937683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.937834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.937862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.938019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.938045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 
00:25:01.020 [2024-07-26 12:25:53.938207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.938234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.938403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.938428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.938555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.938581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.938728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.938754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.938930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.938955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 
00:25:01.020 [2024-07-26 12:25:53.939091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.939117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.939272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.939298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.939431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.939456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.939606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.939631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 00:25:01.020 [2024-07-26 12:25:53.939796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.020 [2024-07-26 12:25:53.939821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.020 qpair failed and we were unable to recover it. 
00:25:01.021 [2024-07-26 12:25:53.945882] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:25:01.021 [2024-07-26 12:25:53.945951] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 
00:25:01.021 [2024-07-26 12:25:53.947523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.022 [2024-07-26 12:25:53.947561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.022 qpair failed and we were unable to recover it. 00:25:01.022 [2024-07-26 12:25:53.947741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.022 [2024-07-26 12:25:53.947770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.022 qpair failed and we were unable to recover it. 00:25:01.022 [2024-07-26 12:25:53.947929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.022 [2024-07-26 12:25:53.947955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.022 qpair failed and we were unable to recover it. 00:25:01.022 [2024-07-26 12:25:53.948088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.022 [2024-07-26 12:25:53.948115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.022 qpair failed and we were unable to recover it. 00:25:01.022 [2024-07-26 12:25:53.948275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.022 [2024-07-26 12:25:53.948302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.022 qpair failed and we were unable to recover it. 
00:25:01.023 [2024-07-26 12:25:53.959101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.023 [2024-07-26 12:25:53.959127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.023 qpair failed and we were unable to recover it. 00:25:01.023 [2024-07-26 12:25:53.959282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.023 [2024-07-26 12:25:53.959309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.023 qpair failed and we were unable to recover it. 00:25:01.023 [2024-07-26 12:25:53.959442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.023 [2024-07-26 12:25:53.959468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.023 qpair failed and we were unable to recover it. 00:25:01.023 [2024-07-26 12:25:53.959623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.023 [2024-07-26 12:25:53.959650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.023 qpair failed and we were unable to recover it. 00:25:01.023 [2024-07-26 12:25:53.959828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.023 [2024-07-26 12:25:53.959854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 
00:25:01.024 [2024-07-26 12:25:53.959993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.960019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.960182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.960208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.960356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.960382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.960516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.960542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.960673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.960699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 
00:25:01.024 [2024-07-26 12:25:53.960859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.960885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.961035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.961069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.961206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.961232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.961386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.961412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.961560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.961587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 
00:25:01.024 [2024-07-26 12:25:53.961743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.961767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.961900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.961926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.962078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.962105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.962287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.962313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.962486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.962512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 
00:25:01.024 [2024-07-26 12:25:53.962673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.962698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.962878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.962906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.963036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.963072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.963204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.963231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.963364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.963389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 
00:25:01.024 [2024-07-26 12:25:53.963544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.963570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.963699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.963727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.963865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.963891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.964042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.964079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.964237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.964263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 
00:25:01.024 [2024-07-26 12:25:53.964393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.964419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.964570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.964596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.964748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.964774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.964906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.964932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.965068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.965094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 
00:25:01.024 [2024-07-26 12:25:53.965256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.965286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.965444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.965469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.965646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.965672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.965823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.965850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 00:25:01.024 [2024-07-26 12:25:53.966008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.024 [2024-07-26 12:25:53.966033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.024 qpair failed and we were unable to recover it. 
00:25:01.025 [2024-07-26 12:25:53.966187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.966216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.966377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.966403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.966529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.966555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.966730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.966756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.966890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.966916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 
00:25:01.025 [2024-07-26 12:25:53.967078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.967105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.967222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.967248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.967378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.967403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.967524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.967550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.967708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.967734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 
00:25:01.025 [2024-07-26 12:25:53.967888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.967914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.968035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.968065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.968202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.968228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.968384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.968409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.968570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.968596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 
00:25:01.025 [2024-07-26 12:25:53.968755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.968781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.968914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.968940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.969074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.969100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.969226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.969253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.969385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.969411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 
00:25:01.025 [2024-07-26 12:25:53.969567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.969592] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.969744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.969769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.969956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.969982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.970145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.970171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.970303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.970329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 
00:25:01.025 [2024-07-26 12:25:53.970478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.970506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.970676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.970702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.970824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.970849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.970996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.971021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.971186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.971213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 
00:25:01.025 [2024-07-26 12:25:53.971342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.971369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.971531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.971557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.971709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.971735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.971903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.971928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 00:25:01.025 [2024-07-26 12:25:53.972082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.025 [2024-07-26 12:25:53.972109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.025 qpair failed and we were unable to recover it. 
00:25:01.025 [2024-07-26 12:25:53.972292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.026 [2024-07-26 12:25:53.972322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.026 qpair failed and we were unable to recover it. 00:25:01.026 [2024-07-26 12:25:53.972451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.026 [2024-07-26 12:25:53.972476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.026 qpair failed and we were unable to recover it. 00:25:01.026 [2024-07-26 12:25:53.972651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.026 [2024-07-26 12:25:53.972677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.026 qpair failed and we were unable to recover it. 00:25:01.026 [2024-07-26 12:25:53.972806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.026 [2024-07-26 12:25:53.972834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.026 qpair failed and we were unable to recover it. 00:25:01.026 [2024-07-26 12:25:53.973015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.026 [2024-07-26 12:25:53.973041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.026 qpair failed and we were unable to recover it. 
00:25:01.026 [2024-07-26 12:25:53.977522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.026 [2024-07-26 12:25:53.977548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.027 qpair failed and we were unable to recover it. 00:25:01.027 [2024-07-26 12:25:53.977675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.027 [2024-07-26 12:25:53.977701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.027 qpair failed and we were unable to recover it. 00:25:01.027 [2024-07-26 12:25:53.977834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.027 [2024-07-26 12:25:53.977860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.027 qpair failed and we were unable to recover it. 00:25:01.027 EAL: No free 2048 kB hugepages reported on node 1 00:25:01.027 [2024-07-26 12:25:53.978037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.027 [2024-07-26 12:25:53.978069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.027 qpair failed and we were unable to recover it. 00:25:01.027 [2024-07-26 12:25:53.978225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.027 [2024-07-26 12:25:53.978251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.027 qpair failed and we were unable to recover it. 
00:25:01.028 [2024-07-26 12:25:53.983597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.028 [2024-07-26 12:25:53.983623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.028 qpair failed and we were unable to recover it. 00:25:01.028 [2024-07-26 12:25:53.983768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.028 [2024-07-26 12:25:53.983793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.028 qpair failed and we were unable to recover it. 00:25:01.028 [2024-07-26 12:25:53.983930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.028 [2024-07-26 12:25:53.983961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.028 qpair failed and we were unable to recover it. 00:25:01.028 [2024-07-26 12:25:53.984143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.028 [2024-07-26 12:25:53.984170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.028 qpair failed and we were unable to recover it. 00:25:01.028 [2024-07-26 12:25:53.984344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.028 [2024-07-26 12:25:53.984379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.028 qpair failed and we were unable to recover it. 
00:25:01.029 [2024-07-26 12:25:53.992328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.992353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.992522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.992547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.992701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.992726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.992882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.992907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.993027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.993053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 
00:25:01.029 [2024-07-26 12:25:53.993218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.993248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.993376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.993401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.993580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.993605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.993729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.993755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.993881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.993906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 
00:25:01.029 [2024-07-26 12:25:53.994026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.994052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.994224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.994249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.994413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.994438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.994597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.994621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.994742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.994766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 
00:25:01.029 [2024-07-26 12:25:53.994891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.994918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.995088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.995118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.995243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.995269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.995395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.029 [2024-07-26 12:25:53.995421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.029 qpair failed and we were unable to recover it. 00:25:01.029 [2024-07-26 12:25:53.995580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.995606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 
00:25:01.030 [2024-07-26 12:25:53.995772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.995797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.995930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.995957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.996114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.996139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.996316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.996341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.996515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.996540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 
00:25:01.030 [2024-07-26 12:25:53.996659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.996684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.996862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.996887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.997015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.997042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.997189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.997214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.997387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.997412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 
00:25:01.030 [2024-07-26 12:25:53.997540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.997565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.997720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.997745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.997869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.997894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.998132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.998158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.998293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.998319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 
00:25:01.030 [2024-07-26 12:25:53.998475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.998501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.998630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.998655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.998829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.998853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.998980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.999004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.999194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.999233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 
00:25:01.030 [2024-07-26 12:25:53.999381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.999409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.999535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.999560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.999684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.999709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:53.999833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:53.999858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:54.000011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:54.000036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 
00:25:01.030 [2024-07-26 12:25:54.000209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:54.000240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:54.000379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:54.000404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:54.000552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:54.000577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:54.000710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:54.000736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:54.000865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:54.000889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 
00:25:01.030 [2024-07-26 12:25:54.001019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:54.001044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:54.001220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:54.001245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:54.001419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:54.001444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:54.001599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.030 [2024-07-26 12:25:54.001625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.030 qpair failed and we were unable to recover it. 00:25:01.030 [2024-07-26 12:25:54.001784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.001808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 
00:25:01.031 [2024-07-26 12:25:54.001936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.001962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.002121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.002147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.002291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.002316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.002480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.002506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.002636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.002662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 
00:25:01.031 [2024-07-26 12:25:54.002824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.002849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.003003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.003027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.003183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.003208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.003365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.003390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.003523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.003548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 
00:25:01.031 [2024-07-26 12:25:54.003720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.003745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.003872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.003898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.004052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.004082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.004236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.004261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.004399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.004428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 
00:25:01.031 [2024-07-26 12:25:54.004583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.004610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.004766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.004796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.004957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.004983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.005131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.005157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 00:25:01.031 [2024-07-26 12:25:54.005283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.031 [2024-07-26 12:25:54.005308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.031 qpair failed and we were unable to recover it. 
00:25:01.031 [2024-07-26 12:25:54.005488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.031 [2024-07-26 12:25:54.005513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:01.031 qpair failed and we were unable to recover it.
00:25:01.031 [... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." triplet repeats for tqpair=0x7fb500000b90 from 12:25:54.005642 through 12:25:54.012574 ...]
00:25:01.032 [2024-07-26 12:25:54.012632] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:25:01.032 [... repeats continue for tqpair=0x7fb500000b90 through 12:25:54.013909, then for tqpair=0x7fb4f0000b90 from 12:25:54.014080 through 12:25:54.024577, then again for tqpair=0x7fb500000b90 from 12:25:54.024739 through 12:25:54.026230, all with addr=10.0.0.2, port=4420 ...]
00:25:01.034 [2024-07-26 12:25:54.026388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.026414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.026544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.026568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.026808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.026833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.027008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.027032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.027171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.027197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 
00:25:01.035 [2024-07-26 12:25:54.027330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.027355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.027503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.027529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.027658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.027684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.027817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.027842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.028000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.028026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 
00:25:01.035 [2024-07-26 12:25:54.028188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.028214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.028348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.028373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.028499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.028523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.028679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.028704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.028849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.028874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 
00:25:01.035 [2024-07-26 12:25:54.028998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.029026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.029177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.029202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.029329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.029355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.029484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.029508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.029736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.029761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 
00:25:01.035 [2024-07-26 12:25:54.029914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.029941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.030098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.030125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.030250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.030276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.030460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.030485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.030637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.030661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 
00:25:01.035 [2024-07-26 12:25:54.030779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.030805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.030926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.030951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.031100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.031126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.031250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.031276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.031511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.031536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 
00:25:01.035 [2024-07-26 12:25:54.031707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.031732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.031856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.031881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.032034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.032064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.032187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.032212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 00:25:01.035 [2024-07-26 12:25:54.032368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.035 [2024-07-26 12:25:54.032393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.035 qpair failed and we were unable to recover it. 
00:25:01.035 [2024-07-26 12:25:54.032547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.032571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.032699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.032724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.032874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.032900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.033028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.033054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.033212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.033237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 
00:25:01.036 [2024-07-26 12:25:54.033370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.033394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.033554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.033579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.033716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.033742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.033899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.033925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.034051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.034083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 
00:25:01.036 [2024-07-26 12:25:54.034215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.034242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.034386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.034411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.034535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.034561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.034737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.034762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.034915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.034940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 
00:25:01.036 [2024-07-26 12:25:54.035071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.035096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.035221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.035245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.035423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.035448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.035576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.035601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.035746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.035772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 
00:25:01.036 [2024-07-26 12:25:54.035902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.035931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.036074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.036099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.036220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.036245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.036475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.036500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.036680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.036705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 
00:25:01.036 [2024-07-26 12:25:54.036859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.036886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.037051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.037082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.037220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.037244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.037373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.037398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.036 [2024-07-26 12:25:54.037524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.037549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 
00:25:01.036 [2024-07-26 12:25:54.037679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.036 [2024-07-26 12:25:54.037703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.036 qpair failed and we were unable to recover it. 00:25:01.037 [2024-07-26 12:25:54.037825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.037850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 00:25:01.037 [2024-07-26 12:25:54.038005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.038031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 00:25:01.037 [2024-07-26 12:25:54.038221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.038246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 00:25:01.037 [2024-07-26 12:25:54.038386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.038411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 
00:25:01.037 [2024-07-26 12:25:54.038542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.038566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 00:25:01.037 [2024-07-26 12:25:54.038715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.038750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 00:25:01.037 [2024-07-26 12:25:54.038926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.038950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 00:25:01.037 [2024-07-26 12:25:54.039113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.039139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 00:25:01.037 [2024-07-26 12:25:54.039294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.039319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 
00:25:01.037 [2024-07-26 12:25:54.039465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.039489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 00:25:01.037 [2024-07-26 12:25:54.039648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.039673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 00:25:01.037 [2024-07-26 12:25:54.039798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.039823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 00:25:01.037 [2024-07-26 12:25:54.039972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.039997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 00:25:01.037 [2024-07-26 12:25:54.040135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.037 [2024-07-26 12:25:54.040161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.037 qpair failed and we were unable to recover it. 
00:25:01.040 [2024-07-26 12:25:54.059285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.059311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 00:25:01.040 [2024-07-26 12:25:54.059485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.059510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 00:25:01.040 [2024-07-26 12:25:54.059637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.059662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 00:25:01.040 [2024-07-26 12:25:54.059833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.059858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 00:25:01.040 [2024-07-26 12:25:54.059984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.060010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 
00:25:01.040 [2024-07-26 12:25:54.060163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.060188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 00:25:01.040 [2024-07-26 12:25:54.060342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.060368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 00:25:01.040 [2024-07-26 12:25:54.060542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.060567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 00:25:01.040 [2024-07-26 12:25:54.060714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.060739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 00:25:01.040 [2024-07-26 12:25:54.060889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.060914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 
00:25:01.040 [2024-07-26 12:25:54.061082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.061107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 00:25:01.040 [2024-07-26 12:25:54.061263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.061288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 00:25:01.040 [2024-07-26 12:25:54.061406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.061431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 00:25:01.040 [2024-07-26 12:25:54.061613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.061637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 00:25:01.040 [2024-07-26 12:25:54.061756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.061782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 
00:25:01.040 [2024-07-26 12:25:54.061936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.040 [2024-07-26 12:25:54.061961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.040 qpair failed and we were unable to recover it. 00:25:01.040 [2024-07-26 12:25:54.062115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.062140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.062275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.062301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.062475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.062500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.062646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.062671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 
00:25:01.041 [2024-07-26 12:25:54.062802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.062828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.062979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.063004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.063136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.063161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.063310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.063334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.063492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.063517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 
00:25:01.041 [2024-07-26 12:25:54.063635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.063664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.063822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.063848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.064003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.064027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.064187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.064213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.064337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.064362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 
00:25:01.041 [2024-07-26 12:25:54.064514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.064539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.064696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.064721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.064853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.064878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.065034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.065066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.065219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.065245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 
00:25:01.041 [2024-07-26 12:25:54.065408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.065434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.065585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.065610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.065761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.065786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.065930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.065955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.066095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.066121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 
00:25:01.041 [2024-07-26 12:25:54.066273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.066298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.066427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.066453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.066606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.066632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.066789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.066814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.066946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.066971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 
00:25:01.041 [2024-07-26 12:25:54.067120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.067146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.041 qpair failed and we were unable to recover it. 00:25:01.041 [2024-07-26 12:25:54.067291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.041 [2024-07-26 12:25:54.067317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.067461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.067487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.067641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.067666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.067834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.067858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 
00:25:01.042 [2024-07-26 12:25:54.067984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.068008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.068173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.068199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.068326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.068352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.068501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.068527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.068679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.068705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 
00:25:01.042 [2024-07-26 12:25:54.068828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.068853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.069030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.069054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.069203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.069229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.069379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.069404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.069527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.069553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 
00:25:01.042 [2024-07-26 12:25:54.069711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.069736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.069863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.069889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.070016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.070042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.070211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.070236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.070364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.070388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 
00:25:01.042 [2024-07-26 12:25:54.070537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.070566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.070733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.070757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.070885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.070909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.071043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.071075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.071252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.071277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 
00:25:01.042 [2024-07-26 12:25:54.071398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.071423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.071555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.071580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.071734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.071760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.071886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.071911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 00:25:01.042 [2024-07-26 12:25:54.072032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.042 [2024-07-26 12:25:54.072064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.042 qpair failed and we were unable to recover it. 
00:25:01.042 [2024-07-26 12:25:54.072271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.042 [2024-07-26 12:25:54.072297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420
00:25:01.042 qpair failed and we were unable to recover it.
[... the same three-line failure (posix.c:1023 connect() errno = 111, nvme_tcp.c:2383 sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats continuously with only the timestamps changing, from 12:25:54.072526 through 12:25:54.092948 ...]
00:25:01.046 [2024-07-26 12:25:54.093079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.093105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.093254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.093279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.093398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.093423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.093563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.093589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.093721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.093746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 
00:25:01.046 [2024-07-26 12:25:54.093926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.093951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.094095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.094127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.094272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.094297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.094430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.094455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.094594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.094619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 
00:25:01.046 [2024-07-26 12:25:54.094749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.094774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.094927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.094951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.095110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.095135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.095285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.095310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.095464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.095490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 
00:25:01.046 [2024-07-26 12:25:54.095635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.095660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.095786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.095812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.095961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.095986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.096117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.096143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.096265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.096290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 
00:25:01.046 [2024-07-26 12:25:54.096445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.096469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.096699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.096724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.096884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.096909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.097035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.097067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.097230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.097255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 
00:25:01.046 [2024-07-26 12:25:54.097384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.097409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.097540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.097565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.046 qpair failed and we were unable to recover it. 00:25:01.046 [2024-07-26 12:25:54.097691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.046 [2024-07-26 12:25:54.097717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.097845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.097872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.098029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.098055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 
00:25:01.047 [2024-07-26 12:25:54.098307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.098332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.098487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.098512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.098636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.098660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.098785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.098810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.098956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.098980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 
00:25:01.047 [2024-07-26 12:25:54.099129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.099154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.099360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.099385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.099531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.099556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.099696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.099721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.099855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.099879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 
00:25:01.047 [2024-07-26 12:25:54.100028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.100054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.100193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.100218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.100374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.100400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.100553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.100578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.100756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.100781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 
00:25:01.047 [2024-07-26 12:25:54.100906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.100930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.101052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.101082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.101225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.101251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.101379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.101404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.101542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.101567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 
00:25:01.047 [2024-07-26 12:25:54.101701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.101726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.101856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.101881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.102019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.102043] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.102235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.102260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.102413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.102438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 
00:25:01.047 [2024-07-26 12:25:54.102554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.102578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.102736] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.102760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.102889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.102914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.103065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.103090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.103242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.103268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 
00:25:01.047 [2024-07-26 12:25:54.103420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.103444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.103607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.047 [2024-07-26 12:25:54.103631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.047 qpair failed and we were unable to recover it. 00:25:01.047 [2024-07-26 12:25:54.103752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.103780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.103934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.103959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.104136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.104161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 
00:25:01.048 [2024-07-26 12:25:54.104337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.104362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.104542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.104567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.104681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.104706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.104856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.104881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.105109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.105134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 
00:25:01.048 [2024-07-26 12:25:54.105261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.105285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.105412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.105438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.105601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.105625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.105854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.105879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.106032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.106057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 
00:25:01.048 [2024-07-26 12:25:54.106196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.106222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.106382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.106407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.106572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.106597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.106729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.106753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 00:25:01.048 [2024-07-26 12:25:54.106896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.048 [2024-07-26 12:25:54.106921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.048 qpair failed and we were unable to recover it. 
00:25:01.051 [2024-07-26 12:25:54.125936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.051 [2024-07-26 12:25:54.125961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.051 qpair failed and we were unable to recover it. 00:25:01.051 [2024-07-26 12:25:54.126092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.051 [2024-07-26 12:25:54.126117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.051 qpair failed and we were unable to recover it. 00:25:01.051 [2024-07-26 12:25:54.126251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.051 [2024-07-26 12:25:54.126275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.051 qpair failed and we were unable to recover it. 00:25:01.051 [2024-07-26 12:25:54.126410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.051 [2024-07-26 12:25:54.126435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.051 qpair failed and we were unable to recover it. 00:25:01.051 [2024-07-26 12:25:54.126611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.051 [2024-07-26 12:25:54.126636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.051 qpair failed and we were unable to recover it. 
00:25:01.051 [2024-07-26 12:25:54.126765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.051 [2024-07-26 12:25:54.126790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.051 qpair failed and we were unable to recover it. 00:25:01.051 [2024-07-26 12:25:54.126925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.051 [2024-07-26 12:25:54.126951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.051 qpair failed and we were unable to recover it. 00:25:01.051 [2024-07-26 12:25:54.127073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.051 [2024-07-26 12:25:54.127099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.051 qpair failed and we were unable to recover it. 00:25:01.051 [2024-07-26 12:25:54.127264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.051 [2024-07-26 12:25:54.127289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.051 qpair failed and we were unable to recover it. 00:25:01.051 [2024-07-26 12:25:54.127440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.051 [2024-07-26 12:25:54.127466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.051 qpair failed and we were unable to recover it. 
00:25:01.051 [2024-07-26 12:25:54.127620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.127645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.127797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.127822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.127954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.127979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.128107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.128136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.128322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.128347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 
00:25:01.052 [2024-07-26 12:25:54.128469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.128494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.128642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.128666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.128849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.128875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.129001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.129026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.129159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.129183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 
00:25:01.052 [2024-07-26 12:25:54.129336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.129361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.129487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.129510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.129673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.129697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.129829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.129855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.129989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.130015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 
00:25:01.052 [2024-07-26 12:25:54.130163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.130189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.130328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.130354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.130474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.130498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.130620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.130643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.130771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.130795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 
00:25:01.052 [2024-07-26 12:25:54.130927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.130952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.131098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.131123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.131245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.131273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.131396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.131421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.131547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.131571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 
00:25:01.052 [2024-07-26 12:25:54.131699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.131723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.131760] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:01.052 [2024-07-26 12:25:54.131795] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:01.052 [2024-07-26 12:25:54.131809] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:01.052 [2024-07-26 12:25:54.131821] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:01.052 [2024-07-26 12:25:54.131831] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:25:01.052 [2024-07-26 12:25:54.131848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.131871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.131900] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:25:01.052 [2024-07-26 12:25:54.132000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.132025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 
00:25:01.052 [2024-07-26 12:25:54.132186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.132144] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:25:01.052 [2024-07-26 12:25:54.132215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.132194] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:25:01.052 [2024-07-26 12:25:54.132197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:25:01.052 [2024-07-26 12:25:54.132348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.132378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.132529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.132553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.052 qpair failed and we were unable to recover it. 00:25:01.052 [2024-07-26 12:25:54.132702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.052 [2024-07-26 12:25:54.132727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 
00:25:01.053 [2024-07-26 12:25:54.132862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.132891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.133045] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.133075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.133242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.133267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.133414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.133438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.133589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.133613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 
00:25:01.053 [2024-07-26 12:25:54.133750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.133774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.133906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.133931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.134086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.134110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.134267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.134292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.134431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.134456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 
00:25:01.053 [2024-07-26 12:25:54.134581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.134605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.134760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.134785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.134928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.134952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.135102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.135127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.135266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.135290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 
00:25:01.053 [2024-07-26 12:25:54.135466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.135490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.135644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.135668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.135820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.135845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.135968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.135992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.136154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.136179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 
00:25:01.053 [2024-07-26 12:25:54.136317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.136342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.136496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.136521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.136675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.136700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.136842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.136867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.137025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.137049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 
00:25:01.053 [2024-07-26 12:25:54.137191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.137216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.137349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.137374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.137554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.137593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.137724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.137752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 00:25:01.053 [2024-07-26 12:25:54.137941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.137968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it. 
00:25:01.053 [2024-07-26 12:25:54.138125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.053 [2024-07-26 12:25:54.138152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.053 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure pair repeats from 12:25:54.138274 through 12:25:54.158282, always errno = 111 against addr=10.0.0.2, port=4420, cycling over tqpair=0x7fb4f8000b90, 0x21bf250, 0x7fb4f0000b90, and 0x7fb500000b90; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:25:01.057 [2024-07-26 12:25:54.158467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.158493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.158630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.158655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.158813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.158847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.159007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.159033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.159183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.159210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 
00:25:01.057 [2024-07-26 12:25:54.159340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.159365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.159506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.159533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.159696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.159722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.159890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.159916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.160080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.160106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 
00:25:01.057 [2024-07-26 12:25:54.160265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.160292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.160459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.160485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.160719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.160745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.160901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.160927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.161084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.161112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 
00:25:01.057 [2024-07-26 12:25:54.161273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.161299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.161436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.161463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.161593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.161620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.161756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.161781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.161932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.161958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 
00:25:01.057 [2024-07-26 12:25:54.162094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.162121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.162275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.162302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.162487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.162512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.162641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.162667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.162846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.162872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 
00:25:01.057 [2024-07-26 12:25:54.163031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.163057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.163214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.163240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.057 qpair failed and we were unable to recover it. 00:25:01.057 [2024-07-26 12:25:54.163367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.057 [2024-07-26 12:25:54.163394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.163574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.163600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.163763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.163791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 
00:25:01.058 [2024-07-26 12:25:54.163925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.163952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.164085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.164112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.164289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.164315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.164451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.164477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.164648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.164675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 
00:25:01.058 [2024-07-26 12:25:54.164806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.164833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.164986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.165013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.165185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.165211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.165337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.165362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.165495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.165521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 
00:25:01.058 [2024-07-26 12:25:54.165654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.165680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.165817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.165843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.166090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.166135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.166299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.166325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.166464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.166491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 
00:25:01.058 [2024-07-26 12:25:54.166638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.166664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.166813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.166838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.167009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.167035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.167175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.167202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.167359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.167385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 
00:25:01.058 [2024-07-26 12:25:54.167551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.167577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.167737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.167762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.167895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.167921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.168092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.168119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.168275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.168300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 
00:25:01.058 [2024-07-26 12:25:54.168450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.168477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.168608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.168634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.168763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.168788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.168916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.168942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.169118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.169145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 
00:25:01.058 [2024-07-26 12:25:54.169294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.169320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.169494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.169519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.058 [2024-07-26 12:25:54.169672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.058 [2024-07-26 12:25:54.169698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.058 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.169854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.169882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.170013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.170038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 
00:25:01.059 [2024-07-26 12:25:54.170245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.170297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.170468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.170506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.170697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.170731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.170868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.170894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.171066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.171093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 
00:25:01.059 [2024-07-26 12:25:54.171228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.171254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.171400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.171425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.171551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.171576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.171735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.171761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.171891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.171916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 
00:25:01.059 [2024-07-26 12:25:54.172073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.172100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.172268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.172294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.172424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.172449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.172597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.172624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 00:25:01.059 [2024-07-26 12:25:54.172786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.172813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 
00:25:01.059 [2024-07-26 12:25:54.176206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.059 [2024-07-26 12:25:54.176236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.059 qpair failed and we were unable to recover it. 
00:25:01.061 [2024-07-26 12:25:54.184204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.061 [2024-07-26 12:25:54.184256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.061 qpair failed and we were unable to recover it. 
00:25:01.062 [2024-07-26 12:25:54.191820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.191845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.191979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.192005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.192150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.192179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.192312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.192339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f0000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.192485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.192524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 
00:25:01.062 [2024-07-26 12:25:54.192674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.192702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.192839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.192865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.192996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.193021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.193182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.193216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.193388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.193412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 
00:25:01.062 [2024-07-26 12:25:54.193567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.193601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.193732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.193757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.193938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.193964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.194098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.194123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.194252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.194277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 
00:25:01.062 [2024-07-26 12:25:54.194410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.194435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.194577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.194608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.194750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.194789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.194921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.194945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.195112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.195138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 
00:25:01.062 [2024-07-26 12:25:54.195266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.195291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.195432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.195457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.195613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.195645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.195784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.195809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 00:25:01.062 [2024-07-26 12:25:54.195941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.062 [2024-07-26 12:25:54.195966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.062 qpair failed and we were unable to recover it. 
00:25:01.063 [2024-07-26 12:25:54.196096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.196122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.196248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.196272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.196409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.196434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.196565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.196590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.196716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.196740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb500000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 
00:25:01.063 [2024-07-26 12:25:54.196885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.196916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.197081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.197119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.197244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.197269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.197413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.197440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.197590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.197616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 
00:25:01.063 [2024-07-26 12:25:54.197760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.197785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.197920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.197945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.198163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.198189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.198316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.198343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.198495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.198521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 
00:25:01.063 [2024-07-26 12:25:54.198654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.198679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.198810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.198835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.198990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.199015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.199150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.199176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.199321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.199360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 
00:25:01.063 [2024-07-26 12:25:54.199491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.199519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.199694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.199720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.199839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.199864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.200026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.200051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.200218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.200244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 
00:25:01.063 [2024-07-26 12:25:54.200384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.200409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.200541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.200567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.200724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.200750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.200875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.200902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.201024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.201049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 
00:25:01.063 [2024-07-26 12:25:54.201218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.201243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.201367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.201392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.201539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.201564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.201697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.201723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.201860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.201888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 
00:25:01.063 [2024-07-26 12:25:54.202087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.202113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.202253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.202278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.202401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.202427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.063 qpair failed and we were unable to recover it. 00:25:01.063 [2024-07-26 12:25:54.202580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.063 [2024-07-26 12:25:54.202606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.202740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.202766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 
00:25:01.064 [2024-07-26 12:25:54.202905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.202932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.203073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.203099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.203230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.203255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.203377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.203402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.203547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.203573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 
00:25:01.064 [2024-07-26 12:25:54.203726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.203752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.203885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.203912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.204037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.204067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.204196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.204221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.204345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.204369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 
00:25:01.064 [2024-07-26 12:25:54.204510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.204535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.204669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.204694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.204824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.204851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.204998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.205024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.205159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.205185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 
00:25:01.064 [2024-07-26 12:25:54.205336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.205361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.205484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.205509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.205673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.205698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.205826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.205852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 00:25:01.064 [2024-07-26 12:25:54.205994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.064 [2024-07-26 12:25:54.206019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.064 qpair failed and we were unable to recover it. 
00:25:01.068 [2024-07-26 12:25:54.224697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.224722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 00:25:01.068 [2024-07-26 12:25:54.224841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.224866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 00:25:01.068 [2024-07-26 12:25:54.224983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.225007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 00:25:01.068 [2024-07-26 12:25:54.225139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.225164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 00:25:01.068 [2024-07-26 12:25:54.225311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.225336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 
00:25:01.068 [2024-07-26 12:25:54.225478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.225504] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 00:25:01.068 [2024-07-26 12:25:54.225627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.225652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 00:25:01.068 [2024-07-26 12:25:54.225831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.225855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 00:25:01.068 [2024-07-26 12:25:54.225980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.226005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 00:25:01.068 [2024-07-26 12:25:54.226133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.226158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 
00:25:01.068 [2024-07-26 12:25:54.226292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.226317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 00:25:01.068 [2024-07-26 12:25:54.226516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.226555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 00:25:01.068 [2024-07-26 12:25:54.226685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.226711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 00:25:01.068 [2024-07-26 12:25:54.226863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.226889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 00:25:01.068 [2024-07-26 12:25:54.227039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.227073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 
00:25:01.068 [2024-07-26 12:25:54.227237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.068 [2024-07-26 12:25:54.227264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.068 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.227417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.227443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.227596] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.227621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.227740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.227765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.227916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.227942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 
00:25:01.069 [2024-07-26 12:25:54.228104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.228131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.228265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.228290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.228415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.228440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.228558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.228583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.228734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.228763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 
00:25:01.069 [2024-07-26 12:25:54.228897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.228922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.229042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.229084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.229219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.229244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.229368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.229393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.229547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.229571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 
00:25:01.069 [2024-07-26 12:25:54.229698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.229723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.229850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.229875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.230001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.230028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.230184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.230210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.230368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.230394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 
00:25:01.069 [2024-07-26 12:25:54.230532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.230558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.230690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.230716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.230880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.230905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.231078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.231104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.231240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.231265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 
00:25:01.069 [2024-07-26 12:25:54.231422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.231446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.231575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.231600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.231738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.069 [2024-07-26 12:25:54.231763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.069 qpair failed and we were unable to recover it. 00:25:01.069 [2024-07-26 12:25:54.231889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.231914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.232072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.232099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 
00:25:01.070 [2024-07-26 12:25:54.232234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.232258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.232388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.232414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.232569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.232594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.232750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.232775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.232902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.232927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 
00:25:01.070 [2024-07-26 12:25:54.233095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.233133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.233300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.233327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.233492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.233518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.233651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.233677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.233816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.233842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 
00:25:01.070 [2024-07-26 12:25:54.234000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.234025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.234158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.234185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.234314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.234339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.234460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.234485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.234607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.234632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 
00:25:01.070 [2024-07-26 12:25:54.234760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.234784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.234919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.234944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.235070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.235097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.235218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.235243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.235395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.235420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 
00:25:01.070 [2024-07-26 12:25:54.235577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.235603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.235757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.235783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.235913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.235939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.236066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.236092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 00:25:01.070 [2024-07-26 12:25:54.236245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.070 [2024-07-26 12:25:54.236270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.070 qpair failed and we were unable to recover it. 
00:25:01.071 [2024-07-26 12:25:54.236420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.071 [2024-07-26 12:25:54.236445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.071 qpair failed and we were unable to recover it. 00:25:01.071 [2024-07-26 12:25:54.236571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.071 [2024-07-26 12:25:54.236595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.071 qpair failed and we were unable to recover it. 00:25:01.071 [2024-07-26 12:25:54.236755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.071 [2024-07-26 12:25:54.236780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.071 qpair failed and we were unable to recover it. 00:25:01.071 [2024-07-26 12:25:54.236960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.071 [2024-07-26 12:25:54.236985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.071 qpair failed and we were unable to recover it. 00:25:01.071 [2024-07-26 12:25:54.237151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.071 [2024-07-26 12:25:54.237177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.071 qpair failed and we were unable to recover it. 
00:25:01.071 [2024-07-26 12:25:54.237313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.071 [2024-07-26 12:25:54.237338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.071 qpair failed and we were unable to recover it. 00:25:01.071 [2024-07-26 12:25:54.237474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.071 [2024-07-26 12:25:54.237500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.071 qpair failed and we were unable to recover it. 00:25:01.071 [2024-07-26 12:25:54.237630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.071 [2024-07-26 12:25:54.237655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.071 qpair failed and we were unable to recover it. 00:25:01.071 [2024-07-26 12:25:54.237783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.071 [2024-07-26 12:25:54.237812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.071 qpair failed and we were unable to recover it. 00:25:01.071 [2024-07-26 12:25:54.237933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.071 [2024-07-26 12:25:54.237958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.071 qpair failed and we were unable to recover it. 
00:25:01.344 [2024-07-26 12:25:54.244042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.344 [2024-07-26 12:25:54.244088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7fb4f8000b90 with addr=10.0.0.2, port=4420
00:25:01.344 qpair failed and we were unable to recover it.
00:25:01.346 [2024-07-26 12:25:54.256418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.256443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.256594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.256619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.256748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.256773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.256895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.256919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.257084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.257110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 
00:25:01.346 [2024-07-26 12:25:54.257236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.257262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.257410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.257435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.257589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.257614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.257744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.257770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.257895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.257921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 
00:25:01.346 [2024-07-26 12:25:54.258104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.258137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.258256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.258281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.258435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.258460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.258583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.258608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.258744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.258769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 
00:25:01.346 [2024-07-26 12:25:54.258891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.258916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.259069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.259095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.259225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.259250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.259388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.259413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.259570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.259595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 
00:25:01.346 [2024-07-26 12:25:54.259726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.259751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.259901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.259926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.260042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.260073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.260208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.260233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.346 [2024-07-26 12:25:54.260397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.260422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 
00:25:01.346 [2024-07-26 12:25:54.260570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.346 [2024-07-26 12:25:54.260595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.346 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.260729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.260753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.260907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.260932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.261115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.261141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.261288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.261313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 
00:25:01.347 [2024-07-26 12:25:54.261459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.261484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.261640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.261665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.261793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.261817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.261979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.262004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.262150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.262176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 
00:25:01.347 [2024-07-26 12:25:54.262309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.262335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.262466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.262491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.262648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.262678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.262827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.262852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.263006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.263031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 
00:25:01.347 [2024-07-26 12:25:54.263173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.263198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.263345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.263370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.263490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.263515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.263635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.263660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.263793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.263819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 
00:25:01.347 [2024-07-26 12:25:54.263976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.264001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.264132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.264158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.264284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.264309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.264434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.264459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.264607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.264632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 
00:25:01.347 [2024-07-26 12:25:54.264788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.264813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.264964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.264989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.265107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.265132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.265259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.265285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.265418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.265444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 
00:25:01.347 [2024-07-26 12:25:54.265577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.265602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.265741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.265766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.265919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.265945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.266073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.266098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.347 [2024-07-26 12:25:54.266217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.266243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 
00:25:01.347 [2024-07-26 12:25:54.266368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.347 [2024-07-26 12:25:54.266393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.347 qpair failed and we were unable to recover it. 00:25:01.348 [2024-07-26 12:25:54.266556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.348 [2024-07-26 12:25:54.266581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.348 qpair failed and we were unable to recover it. 00:25:01.348 [2024-07-26 12:25:54.266727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.348 [2024-07-26 12:25:54.266751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.348 qpair failed and we were unable to recover it. 00:25:01.348 [2024-07-26 12:25:54.266873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.348 [2024-07-26 12:25:54.266898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.348 qpair failed and we were unable to recover it. 00:25:01.348 [2024-07-26 12:25:54.267022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.348 [2024-07-26 12:25:54.267052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.348 qpair failed and we were unable to recover it. 
00:25:01.348 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@860 -- # (( i == 0 ))
00:25:01.348 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # return 0
00:25:01.348 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt
00:25:01.348 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@730 -- # xtrace_disable
00:25:01.348 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
[... the same errno = 111 / tqpair=0x21bf250 error triplets continue interleaved with the shell trace above, 12:25:54.267 through 12:25:54.269 ...]
[... errno = 111 / tqpair=0x21bf250 error triplets repeat unchanged through 12:25:54.271 ...]
00:25:01.349 [2024-07-26 12:25:54.272033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.272072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.272229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.272255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.272440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.272466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.272595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.272620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.272751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.272776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 
00:25:01.349 [2024-07-26 12:25:54.272925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.272950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.273109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.273137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.273265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.273290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.273455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.273480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.273603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.273628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 
00:25:01.349 [2024-07-26 12:25:54.273752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.273778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.273912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.273937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.274054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.274085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.274235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.274261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.274382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.274407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 
00:25:01.349 [2024-07-26 12:25:54.274542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.274567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.274722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.274748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.274895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.274920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.275046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.275078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.275228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.275254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 
00:25:01.349 [2024-07-26 12:25:54.275416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.275441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.275556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.275582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.275705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.275730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.275856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.275882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.276016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.276041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 
00:25:01.349 [2024-07-26 12:25:54.276165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.276191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.276320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.276345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.276467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.276492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.276642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.349 [2024-07-26 12:25:54.276667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.349 qpair failed and we were unable to recover it. 00:25:01.349 [2024-07-26 12:25:54.276831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.276856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 
00:25:01.350 [2024-07-26 12:25:54.276977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.277002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.277163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.277189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.277311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.277337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.277466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.277492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.277617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.277642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 
00:25:01.350 [2024-07-26 12:25:54.277799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.277825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.277951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.277976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.278102] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.278127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.278278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.278302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.278423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.278448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 
00:25:01.350 [2024-07-26 12:25:54.278603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.278629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.278745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.278770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.278914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.278939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.279089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.279114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.279234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.279259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 
00:25:01.350 [2024-07-26 12:25:54.279394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.279419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.279542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.279567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.279716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.279741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.279891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.279916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.280043] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.280081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 
00:25:01.350 [2024-07-26 12:25:54.280204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.280230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.280376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.280401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.280522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.280548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.280713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.280739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.280868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.280892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 
00:25:01.350 [2024-07-26 12:25:54.281044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.281077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.281239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.281264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.281411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.281436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.281585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.281611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.281749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.281774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 
00:25:01.350 [2024-07-26 12:25:54.281957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.281982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.282115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.282140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.282295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.282324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.282453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.282478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 00:25:01.350 [2024-07-26 12:25:54.282641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.282666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.350 qpair failed and we were unable to recover it. 
00:25:01.350 [2024-07-26 12:25:54.282794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.350 [2024-07-26 12:25:54.282819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.282966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.282991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.283118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.283145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.283267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.283292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.283418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.283445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 
00:25:01.351 [2024-07-26 12:25:54.283600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.283625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.283775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.283801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.283937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.283962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.284085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.284110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.284227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.284252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 
00:25:01.351 [2024-07-26 12:25:54.284392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.284417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.284558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.284586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.284744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.284769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.284903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.284928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.285065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.285090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 
00:25:01.351 [2024-07-26 12:25:54.285209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.285234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.285366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.285392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.285555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.285581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.285728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.285753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.285871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.285896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 
00:25:01.351 [2024-07-26 12:25:54.286054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.286086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.286206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.286232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.286380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.286405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.286527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.286552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.286700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.286729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 
00:25:01.351 [2024-07-26 12:25:54.286856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.286881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.287038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.287071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.287197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.287223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.287362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.287387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.287523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.287548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 
00:25:01.351 [2024-07-26 12:25:54.287668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.287692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.287845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.287871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.287993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.288018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.288215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.288242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.288365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.288390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 
00:25:01.351 [2024-07-26 12:25:54.288515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.351 [2024-07-26 12:25:54.288540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.351 qpair failed and we were unable to recover it. 00:25:01.351 [2024-07-26 12:25:54.288674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.288699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.288835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.288860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.289017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.289042] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.289190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.289216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 
00:25:01.352 [2024-07-26 12:25:54.289364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.289389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.289513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.289539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.289700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.289726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.289852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.289877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.290008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.290034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 
00:25:01.352 [2024-07-26 12:25:54.290173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.290199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.290328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.290353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.290511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.290536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.290713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.290740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.290873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.290899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 
00:25:01.352 [2024-07-26 12:25:54.291026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.291051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.291214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.291240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.291366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.291392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.291521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.291546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.291665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.291690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 
00:25:01.352 [2024-07-26 12:25:54.291811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.291837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.291974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.292000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.292145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.292171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.292330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.292355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.292482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.292508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 
00:25:01.352 [2024-07-26 12:25:54.292654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.292679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.292821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.292847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.292973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.292998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.293149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.293175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.293320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.293356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 
00:25:01.352 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:01.352 [2024-07-26 12:25:54.293512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.293539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.293660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.293686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:25:01.352 [2024-07-26 12:25:54.293832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.293865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 [2024-07-26 12:25:54.293996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.294021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 
00:25:01.352 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.352 [2024-07-26 12:25:54.294173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.352 [2024-07-26 12:25:54.294199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.352 qpair failed and we were unable to recover it. 00:25:01.352 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.352 [2024-07-26 12:25:54.294331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.294359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.294488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.294513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.294638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.294663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.294796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.294821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 
00:25:01.353 [2024-07-26 12:25:54.294945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.294970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.295125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.295151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.295278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.295304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.295444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.295470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.295592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.295617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 
00:25:01.353 [2024-07-26 12:25:54.295766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.295791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.295952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.295978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.296127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.296153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.296310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.296336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.296456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.296480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 
00:25:01.353 [2024-07-26 12:25:54.296606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.296631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.296778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.296804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.296955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.296980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.297129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.297155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.297282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.297307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 
00:25:01.353 [2024-07-26 12:25:54.297467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.297492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.297611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.297640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.297773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.297799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.297931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.297956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.298079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.298104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 
00:25:01.353 [2024-07-26 12:25:54.298256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.298281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.298397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.298423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.298577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.298602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.298720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.298745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.298913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.298938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 
00:25:01.353 [2024-07-26 12:25:54.299101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.299161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.353 [2024-07-26 12:25:54.299316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.353 [2024-07-26 12:25:54.299341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.353 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.299471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.299496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.299617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.299644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.299796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.299821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 
00:25:01.354 [2024-07-26 12:25:54.299978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.300003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.300131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.300158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.300294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.300319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.300452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.300477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.300619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.300644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 
00:25:01.354 [2024-07-26 12:25:54.300792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.300816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.300996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.301021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.301153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.301178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.301311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.301336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.301460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.301485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 
00:25:01.354 [2024-07-26 12:25:54.301635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.301659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.301785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.301810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.301940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.301965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.302140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.302169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.302297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.302322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 
00:25:01.354 [2024-07-26 12:25:54.302499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.302524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.302673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.302697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.302818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.302843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.302976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.303001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.303132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.303158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 
00:25:01.354 [2024-07-26 12:25:54.303291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.303317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.303468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.303493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.303614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.303638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.303787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.303812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.303931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.303956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 
00:25:01.354 [2024-07-26 12:25:54.304096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.304122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.304278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.304303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.304463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.304488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.304637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.304662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.304782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.304806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 
00:25:01.354 [2024-07-26 12:25:54.304930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.304955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.305078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.305104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.354 qpair failed and we were unable to recover it. 00:25:01.354 [2024-07-26 12:25:54.305220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.354 [2024-07-26 12:25:54.305245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.305369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.305393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.305506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.305531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 
00:25:01.355 [2024-07-26 12:25:54.305678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.305702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.305854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.305879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.306027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.306052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.306200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.306225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.306383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.306408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 
00:25:01.355 [2024-07-26 12:25:54.306531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.306560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.306773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.306798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.306948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.306973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.307110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.307135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.307259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.307284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 
00:25:01.355 [2024-07-26 12:25:54.307409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.307436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.307593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.307618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.307752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.307778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.307910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.307936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.308092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.308126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 
00:25:01.355 [2024-07-26 12:25:54.308249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.308274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.308388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.308413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.308568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.308593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.308710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.308734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.308901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.308926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 
00:25:01.355 [2024-07-26 12:25:54.309054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.309084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.309248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.309274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.309425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.309450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.309566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.309591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.309749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.309774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 
00:25:01.355 [2024-07-26 12:25:54.309899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.309925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.310082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.310108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.310244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.310269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.310384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.310409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.310541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.310566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 
00:25:01.355 [2024-07-26 12:25:54.310681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.310706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.310834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.310859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.310972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.355 [2024-07-26 12:25:54.310997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.355 qpair failed and we were unable to recover it. 00:25:01.355 [2024-07-26 12:25:54.311143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.311168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.311383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.311408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 
00:25:01.356 [2024-07-26 12:25:54.311525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.311550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.311698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.311723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.311861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.311886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.312008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.312033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.312236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.312261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 
00:25:01.356 [2024-07-26 12:25:54.312388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.312413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.312576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.312601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.312725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.312749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.312899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.312924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.313041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.313073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 
00:25:01.356 [2024-07-26 12:25:54.313215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.313240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.313371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.313396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.313554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.313578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.313713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.313738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.313918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.313943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 
00:25:01.356 [2024-07-26 12:25:54.314095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.314127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.314249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.314274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.314506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.314531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.314695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.314719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.314842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.314867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 
00:25:01.356 [2024-07-26 12:25:54.315029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.315054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.315211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.315236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.315366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.315391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.315548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.315579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.315711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.315736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 
00:25:01.356 [2024-07-26 12:25:54.315890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.315915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.316046] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.316076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.316208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.316233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.316355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.316379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.316509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.316533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 
00:25:01.356 [2024-07-26 12:25:54.316680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.316705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.316838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.316864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.316978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.317002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.317135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.317164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.356 qpair failed and we were unable to recover it. 00:25:01.356 [2024-07-26 12:25:54.317314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.356 [2024-07-26 12:25:54.317339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.357 qpair failed and we were unable to recover it. 
00:25:01.357 [2024-07-26 12:25:54.317499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.317524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.317641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.317667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.317797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.317821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.317981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.318010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.318152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.318177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.318303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.318328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.318459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.318483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.318641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.318666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.318790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.318816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 Malloc0
00:25:01.357 [2024-07-26 12:25:54.318968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.318993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.319119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.319145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:01.357 [2024-07-26 12:25:54.319316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.319341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:25:01.357 [2024-07-26 12:25:54.319487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.319512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:01.357 [2024-07-26 12:25:54.319645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.319670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:01.357 [2024-07-26 12:25:54.319799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.319824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.319960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.319986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.320133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.320159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.320299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.320324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.320440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.320465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.320598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.320624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.320784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.320809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.320942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.320967] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.321125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.321151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.321283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.321308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.321449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.321475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.321604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.321628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.321753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.321777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.321930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.321955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.322109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.322134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.322272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.322297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.322445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.322470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.322472] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:25:01.357 [2024-07-26 12:25:54.322621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.322645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.322794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.322819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.357 [2024-07-26 12:25:54.322946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.357 [2024-07-26 12:25:54.322971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.357 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.323130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.323156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.323285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.323309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.323452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.323477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.323608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.323633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.323814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.323839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.323977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.324002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.324183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.324208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.324362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.324386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.324542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.324567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.324701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.324725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.324883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.324907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.325027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.325051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.325195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.325220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.325371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.325396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.325523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.325548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.325679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.325704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.325837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.325862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.325983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.326007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.326137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.326163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.326320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.326345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.326491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.326516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.326647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.326675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.326796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.326821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.326973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.326998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.327121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.327146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.327298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.327323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.327458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.327483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.327631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.327655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.327796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.327820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.327995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.328020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.358 qpair failed and we were unable to recover it.
00:25:01.358 [2024-07-26 12:25:54.328171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.358 [2024-07-26 12:25:54.328196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.328318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.328343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.328481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.328506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.328638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.328664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.328790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.328814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.328999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.329024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.329167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.329192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.329317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.329341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.329466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.329491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.329623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.329648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.329803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.329828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.329946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.329970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.330103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.330128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.330269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.330294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.330463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.330487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.330625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.330650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:01.359 [2024-07-26 12:25:54.330775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.330800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:25:01.359 [2024-07-26 12:25:54.330930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.330958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:01.359 [2024-07-26 12:25:54.331087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:01.359 [2024-07-26 12:25:54.331123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.331249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.331274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.331426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.331451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.331587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.331611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.331758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.331783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.331910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.331935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.332051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.332081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.332208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.332233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.332354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.332379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.332531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.332556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.332690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.359 [2024-07-26 12:25:54.332715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.359 qpair failed and we were unable to recover it.
00:25:01.359 [2024-07-26 12:25:54.332844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.359 [2024-07-26 12:25:54.332870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.359 qpair failed and we were unable to recover it. 00:25:01.359 [2024-07-26 12:25:54.333033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.359 [2024-07-26 12:25:54.333067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.359 qpair failed and we were unable to recover it. 00:25:01.359 [2024-07-26 12:25:54.333195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.359 [2024-07-26 12:25:54.333220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.359 qpair failed and we were unable to recover it. 00:25:01.359 [2024-07-26 12:25:54.333349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.359 [2024-07-26 12:25:54.333373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.359 qpair failed and we were unable to recover it. 00:25:01.359 [2024-07-26 12:25:54.333545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.359 [2024-07-26 12:25:54.333570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.359 qpair failed and we were unable to recover it. 
00:25:01.359 [2024-07-26 12:25:54.333724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.359 [2024-07-26 12:25:54.333749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.359 qpair failed and we were unable to recover it. 00:25:01.359 [2024-07-26 12:25:54.333867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.333892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.334050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.334093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.334238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.334264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.334417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.334442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 
00:25:01.360 [2024-07-26 12:25:54.334565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.334589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.334748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.334773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.334907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.334933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.335064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.335090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.335223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.335247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 
00:25:01.360 [2024-07-26 12:25:54.335383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.335408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.335569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.335593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.335708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.335733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.335853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.335878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.336025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.336050] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 
00:25:01.360 [2024-07-26 12:25:54.336185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.336210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.336334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.336358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.336496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.336521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.336639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.336664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.336801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.336826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 
00:25:01.360 [2024-07-26 12:25:54.336951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.336975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.337127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.337153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.337293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.337318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.337450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.337480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.337594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.337619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 
00:25:01.360 [2024-07-26 12:25:54.337757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.337783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.337908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.337933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.338077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.338114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.338239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.338264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.338394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.338419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 
00:25:01.360 [2024-07-26 12:25:54.338566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.338591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.360 [2024-07-26 12:25:54.338721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.338749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:25:01.360 [2024-07-26 12:25:54.338877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.338903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.360 [2024-07-26 12:25:54.339027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.339052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.360 qpair failed and we were unable to recover it. 
00:25:01.360 [2024-07-26 12:25:54.339229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.339256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.360 qpair failed and we were unable to recover it. 00:25:01.360 [2024-07-26 12:25:54.339406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.360 [2024-07-26 12:25:54.339432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.339580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.339605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.339769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.339794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.339914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.339939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 
00:25:01.361 [2024-07-26 12:25:54.340078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.340103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.340233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.340258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.340378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.340404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.340564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.340590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.340709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.340735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 
00:25:01.361 [2024-07-26 12:25:54.340861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.340885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.341041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.341072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.341196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.341221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.341375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.341400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.341527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.341552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 
00:25:01.361 [2024-07-26 12:25:54.341679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.341706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.341859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.341884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.342044] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.342076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.342203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.342228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.342354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.342379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 
00:25:01.361 [2024-07-26 12:25:54.342496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.342522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.342648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.342673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.342826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.342857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.342991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.343017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.343156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.343182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 
00:25:01.361 [2024-07-26 12:25:54.343326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.343352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.343500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.343525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.343666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.343691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.343816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.343842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.343980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.344006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 
00:25:01.361 [2024-07-26 12:25:54.344142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.344168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.344316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.344341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.344497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.344523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.344669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.344694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.344857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.344883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 
00:25:01.361 [2024-07-26 12:25:54.345082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.345115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.345232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.345257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.361 [2024-07-26 12:25:54.345383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.361 [2024-07-26 12:25:54.345408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.361 qpair failed and we were unable to recover it. 00:25:01.362 [2024-07-26 12:25:54.345552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.345578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 00:25:01.362 [2024-07-26 12:25:54.345706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.345731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 
00:25:01.362 [2024-07-26 12:25:54.345857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.345882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 00:25:01.362 [2024-07-26 12:25:54.346010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.346036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 00:25:01.362 [2024-07-26 12:25:54.346171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.346196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 00:25:01.362 [2024-07-26 12:25:54.346344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.346370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 00:25:01.362 [2024-07-26 12:25:54.346492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.346517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 
00:25:01.362 [2024-07-26 12:25:54.346640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.346665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 00:25:01.362 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:01.362 [2024-07-26 12:25:54.346799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.346824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 00:25:01.362 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:01.362 [2024-07-26 12:25:54.346940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.346965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 
00:25:01.362 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:01.362 [2024-07-26 12:25:54.347096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:25:01.362 [2024-07-26 12:25:54.347122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 00:25:01.362 [2024-07-26 12:25:54.347247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.347272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 00:25:01.362 [2024-07-26 12:25:54.347395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.347420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 00:25:01.362 [2024-07-26 12:25:54.347562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.347588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 00:25:01.362 [2024-07-26 12:25:54.347714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:25:01.362 [2024-07-26 12:25:54.347740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420 00:25:01.362 qpair failed and we were unable to recover it. 
00:25:01.362 [2024-07-26 12:25:54.347873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.347903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.348026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.348052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.348220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.348245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.348368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.348393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.348518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.348543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.348679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.348704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.348834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.348859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.349019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.349044] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.349190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.349215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.349355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.349380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.349545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.349569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.349685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.349710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.349829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.349854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.350014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.350039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.350188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.350215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.350331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.350356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.350480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.350506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.350626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:25:01.362 [2024-07-26 12:25:54.350651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x21bf250 with addr=10.0.0.2, port=4420
00:25:01.362 qpair failed and we were unable to recover it.
00:25:01.362 [2024-07-26 12:25:54.350740] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:25:01.362 [2024-07-26 12:25:54.353220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.362 [2024-07-26 12:25:54.353391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.363 [2024-07-26 12:25:54.353419] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.363 [2024-07-26 12:25:54.353435] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.363 [2024-07-26 12:25:54.353449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.363 [2024-07-26 12:25:54.353484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.363 qpair failed and we were unable to recover it.
00:25:01.363 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:01.363 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
00:25:01.363 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@561 -- # xtrace_disable
00:25:01.363 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:01.363 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:25:01.363 12:25:54 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 2978878
00:25:01.363 [2024-07-26 12:25:54.363168] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.363 [2024-07-26 12:25:54.363351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.363 [2024-07-26 12:25:54.363378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.363 [2024-07-26 12:25:54.363393] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.363 [2024-07-26 12:25:54.363407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.363 [2024-07-26 12:25:54.363435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.363 qpair failed and we were unable to recover it.
00:25:01.363 [2024-07-26 12:25:54.373118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.363 [2024-07-26 12:25:54.373261] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.363 [2024-07-26 12:25:54.373288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.363 [2024-07-26 12:25:54.373303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.363 [2024-07-26 12:25:54.373316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.363 [2024-07-26 12:25:54.373344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.363 qpair failed and we were unable to recover it.
00:25:01.363 [2024-07-26 12:25:54.383128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.363 [2024-07-26 12:25:54.383268] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.363 [2024-07-26 12:25:54.383295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.363 [2024-07-26 12:25:54.383310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.363 [2024-07-26 12:25:54.383323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.363 [2024-07-26 12:25:54.383351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.363 qpair failed and we were unable to recover it.
00:25:01.363 [2024-07-26 12:25:54.393087] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.363 [2024-07-26 12:25:54.393235] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.363 [2024-07-26 12:25:54.393261] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.363 [2024-07-26 12:25:54.393276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.363 [2024-07-26 12:25:54.393289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.363 [2024-07-26 12:25:54.393318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.363 qpair failed and we were unable to recover it.
00:25:01.363 [2024-07-26 12:25:54.403084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.363 [2024-07-26 12:25:54.403230] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.363 [2024-07-26 12:25:54.403256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.363 [2024-07-26 12:25:54.403271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.363 [2024-07-26 12:25:54.403284] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.363 [2024-07-26 12:25:54.403312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.363 qpair failed and we were unable to recover it.
00:25:01.363 [2024-07-26 12:25:54.413210] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.363 [2024-07-26 12:25:54.413349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.363 [2024-07-26 12:25:54.413375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.363 [2024-07-26 12:25:54.413397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.363 [2024-07-26 12:25:54.413411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.363 [2024-07-26 12:25:54.413439] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.363 qpair failed and we were unable to recover it.
00:25:01.363 [2024-07-26 12:25:54.423136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.363 [2024-07-26 12:25:54.423263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.363 [2024-07-26 12:25:54.423289] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.363 [2024-07-26 12:25:54.423304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.363 [2024-07-26 12:25:54.423317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.363 [2024-07-26 12:25:54.423345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.363 qpair failed and we were unable to recover it.
00:25:01.363 [2024-07-26 12:25:54.433205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.363 [2024-07-26 12:25:54.433356] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.363 [2024-07-26 12:25:54.433385] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.363 [2024-07-26 12:25:54.433401] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.363 [2024-07-26 12:25:54.433414] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.363 [2024-07-26 12:25:54.433443] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.363 qpair failed and we were unable to recover it.
00:25:01.363 [2024-07-26 12:25:54.443156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.363 [2024-07-26 12:25:54.443283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.363 [2024-07-26 12:25:54.443309] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.363 [2024-07-26 12:25:54.443324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.363 [2024-07-26 12:25:54.443337] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.363 [2024-07-26 12:25:54.443366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.363 qpair failed and we were unable to recover it.
00:25:01.363 [2024-07-26 12:25:54.453235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.363 [2024-07-26 12:25:54.453371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.363 [2024-07-26 12:25:54.453399] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.364 [2024-07-26 12:25:54.453418] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.364 [2024-07-26 12:25:54.453432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.364 [2024-07-26 12:25:54.453461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.364 qpair failed and we were unable to recover it.
00:25:01.364 [2024-07-26 12:25:54.463218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.364 [2024-07-26 12:25:54.463352] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.364 [2024-07-26 12:25:54.463379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.364 [2024-07-26 12:25:54.463394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.364 [2024-07-26 12:25:54.463407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.364 [2024-07-26 12:25:54.463435] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.364 qpair failed and we were unable to recover it.
00:25:01.364 [2024-07-26 12:25:54.473381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.364 [2024-07-26 12:25:54.473520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.364 [2024-07-26 12:25:54.473547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.364 [2024-07-26 12:25:54.473562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.364 [2024-07-26 12:25:54.473575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.364 [2024-07-26 12:25:54.473603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.364 qpair failed and we were unable to recover it.
00:25:01.364 [2024-07-26 12:25:54.483297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.364 [2024-07-26 12:25:54.483442] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.364 [2024-07-26 12:25:54.483468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.364 [2024-07-26 12:25:54.483484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.364 [2024-07-26 12:25:54.483497] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.364 [2024-07-26 12:25:54.483525] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.364 qpair failed and we were unable to recover it.
00:25:01.364 [2024-07-26 12:25:54.493372] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.364 [2024-07-26 12:25:54.493506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.364 [2024-07-26 12:25:54.493532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.364 [2024-07-26 12:25:54.493547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.364 [2024-07-26 12:25:54.493561] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.364 [2024-07-26 12:25:54.493588] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.364 qpair failed and we were unable to recover it.
00:25:01.364 [2024-07-26 12:25:54.503377] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.364 [2024-07-26 12:25:54.503507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.364 [2024-07-26 12:25:54.503533] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.364 [2024-07-26 12:25:54.503556] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.364 [2024-07-26 12:25:54.503570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.364 [2024-07-26 12:25:54.503598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.364 qpair failed and we were unable to recover it.
00:25:01.364 [2024-07-26 12:25:54.513407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.364 [2024-07-26 12:25:54.513558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.364 [2024-07-26 12:25:54.513583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.364 [2024-07-26 12:25:54.513598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.364 [2024-07-26 12:25:54.513611] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.364 [2024-07-26 12:25:54.513639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.364 qpair failed and we were unable to recover it.
00:25:01.364 [2024-07-26 12:25:54.523516] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.364 [2024-07-26 12:25:54.523640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.364 [2024-07-26 12:25:54.523665] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.364 [2024-07-26 12:25:54.523680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.364 [2024-07-26 12:25:54.523693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.364 [2024-07-26 12:25:54.523721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.364 qpair failed and we were unable to recover it.
00:25:01.364 [2024-07-26 12:25:54.533427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.364 [2024-07-26 12:25:54.533547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.364 [2024-07-26 12:25:54.533572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.364 [2024-07-26 12:25:54.533587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.364 [2024-07-26 12:25:54.533600] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.364 [2024-07-26 12:25:54.533628] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.364 qpair failed and we were unable to recover it.
00:25:01.364 [2024-07-26 12:25:54.543472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.364 [2024-07-26 12:25:54.543603] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.364 [2024-07-26 12:25:54.543628] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.364 [2024-07-26 12:25:54.543642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.364 [2024-07-26 12:25:54.543655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.364 [2024-07-26 12:25:54.543683] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.364 qpair failed and we were unable to recover it.
00:25:01.364 [2024-07-26 12:25:54.553645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.364 [2024-07-26 12:25:54.553777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.364 [2024-07-26 12:25:54.553804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.364 [2024-07-26 12:25:54.553819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.364 [2024-07-26 12:25:54.553832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.364 [2024-07-26 12:25:54.553861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.364 qpair failed and we were unable to recover it.
00:25:01.364 [2024-07-26 12:25:54.563534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.364 [2024-07-26 12:25:54.563662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.364 [2024-07-26 12:25:54.563688] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.364 [2024-07-26 12:25:54.563703] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.364 [2024-07-26 12:25:54.563717] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.364 [2024-07-26 12:25:54.563744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.364 qpair failed and we were unable to recover it.
00:25:01.364 [2024-07-26 12:25:54.573552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.364 [2024-07-26 12:25:54.573672] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.364 [2024-07-26 12:25:54.573697] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.364 [2024-07-26 12:25:54.573711] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.364 [2024-07-26 12:25:54.573725] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.364 [2024-07-26 12:25:54.573753] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.364 qpair failed and we were unable to recover it.
00:25:01.364 [2024-07-26 12:25:54.583594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.365 [2024-07-26 12:25:54.583751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.365 [2024-07-26 12:25:54.583777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.365 [2024-07-26 12:25:54.583791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.365 [2024-07-26 12:25:54.583804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.365 [2024-07-26 12:25:54.583832] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.365 qpair failed and we were unable to recover it.
00:25:01.625 [2024-07-26 12:25:54.593677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.625 [2024-07-26 12:25:54.593838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.625 [2024-07-26 12:25:54.593863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.625 [2024-07-26 12:25:54.593884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.625 [2024-07-26 12:25:54.593897] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.625 [2024-07-26 12:25:54.593925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.625 qpair failed and we were unable to recover it.
00:25:01.625 [2024-07-26 12:25:54.603681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.625 [2024-07-26 12:25:54.603813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.625 [2024-07-26 12:25:54.603839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.625 [2024-07-26 12:25:54.603853] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.625 [2024-07-26 12:25:54.603866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.625 [2024-07-26 12:25:54.603895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.625 qpair failed and we were unable to recover it.
00:25:01.625 [2024-07-26 12:25:54.613688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.625 [2024-07-26 12:25:54.613828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.625 [2024-07-26 12:25:54.613854] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.625 [2024-07-26 12:25:54.613868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.625 [2024-07-26 12:25:54.613882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.625 [2024-07-26 12:25:54.613909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.625 qpair failed and we were unable to recover it.
00:25:01.625 [2024-07-26 12:25:54.623715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.625 [2024-07-26 12:25:54.623847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.625 [2024-07-26 12:25:54.623872] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.625 [2024-07-26 12:25:54.623887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.625 [2024-07-26 12:25:54.623900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.625 [2024-07-26 12:25:54.623928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.625 qpair failed and we were unable to recover it. 
00:25:01.625 [2024-07-26 12:25:54.633765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.625 [2024-07-26 12:25:54.633921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.625 [2024-07-26 12:25:54.633946] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.625 [2024-07-26 12:25:54.633960] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.625 [2024-07-26 12:25:54.633973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.625 [2024-07-26 12:25:54.634001] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.625 qpair failed and we were unable to recover it. 
00:25:01.625 [2024-07-26 12:25:54.643773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.626 [2024-07-26 12:25:54.643900] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.626 [2024-07-26 12:25:54.643926] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.626 [2024-07-26 12:25:54.643941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.626 [2024-07-26 12:25:54.643954] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.626 [2024-07-26 12:25:54.643981] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.626 qpair failed and we were unable to recover it. 
00:25:01.626 [2024-07-26 12:25:54.653886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.626 [2024-07-26 12:25:54.654009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.626 [2024-07-26 12:25:54.654034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.626 [2024-07-26 12:25:54.654049] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.626 [2024-07-26 12:25:54.654068] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.626 [2024-07-26 12:25:54.654097] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.626 qpair failed and we were unable to recover it. 
00:25:01.626 [2024-07-26 12:25:54.663893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.626 [2024-07-26 12:25:54.664040] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.626 [2024-07-26 12:25:54.664072] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.626 [2024-07-26 12:25:54.664088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.626 [2024-07-26 12:25:54.664102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.626 [2024-07-26 12:25:54.664130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.626 qpair failed and we were unable to recover it. 
00:25:01.626 [2024-07-26 12:25:54.673855] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.626 [2024-07-26 12:25:54.673982] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.626 [2024-07-26 12:25:54.674008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.626 [2024-07-26 12:25:54.674023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.626 [2024-07-26 12:25:54.674036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.626 [2024-07-26 12:25:54.674073] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.626 qpair failed and we were unable to recover it. 
00:25:01.626 [2024-07-26 12:25:54.683843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.626 [2024-07-26 12:25:54.683985] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.626 [2024-07-26 12:25:54.684016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.626 [2024-07-26 12:25:54.684032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.626 [2024-07-26 12:25:54.684045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.626 [2024-07-26 12:25:54.684079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.626 qpair failed and we were unable to recover it. 
00:25:01.626 [2024-07-26 12:25:54.693886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.626 [2024-07-26 12:25:54.694020] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.626 [2024-07-26 12:25:54.694045] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.626 [2024-07-26 12:25:54.694066] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.626 [2024-07-26 12:25:54.694082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.626 [2024-07-26 12:25:54.694110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.626 qpair failed and we were unable to recover it. 
00:25:01.626 [2024-07-26 12:25:54.703915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.626 [2024-07-26 12:25:54.704047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.626 [2024-07-26 12:25:54.704079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.626 [2024-07-26 12:25:54.704095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.626 [2024-07-26 12:25:54.704108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.626 [2024-07-26 12:25:54.704136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.626 qpair failed and we were unable to recover it. 
00:25:01.626 [2024-07-26 12:25:54.713962] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.626 [2024-07-26 12:25:54.714098] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.626 [2024-07-26 12:25:54.714124] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.626 [2024-07-26 12:25:54.714139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.626 [2024-07-26 12:25:54.714152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.626 [2024-07-26 12:25:54.714180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.626 qpair failed and we were unable to recover it. 
00:25:01.626 [2024-07-26 12:25:54.723993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.626 [2024-07-26 12:25:54.724132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.626 [2024-07-26 12:25:54.724158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.626 [2024-07-26 12:25:54.724172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.626 [2024-07-26 12:25:54.724185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.626 [2024-07-26 12:25:54.724219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.626 qpair failed and we were unable to recover it. 
00:25:01.626 [2024-07-26 12:25:54.734040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.626 [2024-07-26 12:25:54.734175] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.626 [2024-07-26 12:25:54.734201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.626 [2024-07-26 12:25:54.734216] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.626 [2024-07-26 12:25:54.734229] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.626 [2024-07-26 12:25:54.734257] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.626 qpair failed and we were unable to recover it. 
00:25:01.626 [2024-07-26 12:25:54.744072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.626 [2024-07-26 12:25:54.744197] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.626 [2024-07-26 12:25:54.744222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.626 [2024-07-26 12:25:54.744236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.626 [2024-07-26 12:25:54.744249] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.626 [2024-07-26 12:25:54.744276] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.626 qpair failed and we were unable to recover it. 
00:25:01.626 [2024-07-26 12:25:54.754175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.626 [2024-07-26 12:25:54.754301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.626 [2024-07-26 12:25:54.754327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.626 [2024-07-26 12:25:54.754342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.626 [2024-07-26 12:25:54.754355] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.626 [2024-07-26 12:25:54.754384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.626 qpair failed and we were unable to recover it. 
00:25:01.626 [2024-07-26 12:25:54.764097] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.626 [2024-07-26 12:25:54.764219] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.626 [2024-07-26 12:25:54.764245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.626 [2024-07-26 12:25:54.764260] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.626 [2024-07-26 12:25:54.764272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.627 [2024-07-26 12:25:54.764300] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.627 qpair failed and we were unable to recover it. 
00:25:01.627 [2024-07-26 12:25:54.774132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.627 [2024-07-26 12:25:54.774253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.627 [2024-07-26 12:25:54.774283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.627 [2024-07-26 12:25:54.774299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.627 [2024-07-26 12:25:54.774313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.627 [2024-07-26 12:25:54.774340] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.627 qpair failed and we were unable to recover it. 
00:25:01.627 [2024-07-26 12:25:54.784164] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.627 [2024-07-26 12:25:54.784295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.627 [2024-07-26 12:25:54.784320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.627 [2024-07-26 12:25:54.784335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.627 [2024-07-26 12:25:54.784347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.627 [2024-07-26 12:25:54.784375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.627 qpair failed and we were unable to recover it. 
00:25:01.627 [2024-07-26 12:25:54.794215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.627 [2024-07-26 12:25:54.794344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.627 [2024-07-26 12:25:54.794370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.627 [2024-07-26 12:25:54.794385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.627 [2024-07-26 12:25:54.794398] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.627 [2024-07-26 12:25:54.794425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.627 qpair failed and we were unable to recover it. 
00:25:01.627 [2024-07-26 12:25:54.804243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.627 [2024-07-26 12:25:54.804371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.627 [2024-07-26 12:25:54.804397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.627 [2024-07-26 12:25:54.804411] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.627 [2024-07-26 12:25:54.804424] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.627 [2024-07-26 12:25:54.804452] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.627 qpair failed and we were unable to recover it. 
00:25:01.627 [2024-07-26 12:25:54.814350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.627 [2024-07-26 12:25:54.814472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.627 [2024-07-26 12:25:54.814498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.627 [2024-07-26 12:25:54.814513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.627 [2024-07-26 12:25:54.814525] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.627 [2024-07-26 12:25:54.814559] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.627 qpair failed and we were unable to recover it. 
00:25:01.627 [2024-07-26 12:25:54.824355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.627 [2024-07-26 12:25:54.824488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.627 [2024-07-26 12:25:54.824513] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.627 [2024-07-26 12:25:54.824528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.627 [2024-07-26 12:25:54.824541] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.627 [2024-07-26 12:25:54.824568] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.627 qpair failed and we were unable to recover it. 
00:25:01.627 [2024-07-26 12:25:54.834360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.627 [2024-07-26 12:25:54.834521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.627 [2024-07-26 12:25:54.834547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.627 [2024-07-26 12:25:54.834562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.627 [2024-07-26 12:25:54.834575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.627 [2024-07-26 12:25:54.834602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.627 qpair failed and we were unable to recover it. 
00:25:01.627 [2024-07-26 12:25:54.844472] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.627 [2024-07-26 12:25:54.844614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.627 [2024-07-26 12:25:54.844639] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.627 [2024-07-26 12:25:54.844654] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.627 [2024-07-26 12:25:54.844667] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.627 [2024-07-26 12:25:54.844695] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.627 qpair failed and we were unable to recover it. 
00:25:01.627 [2024-07-26 12:25:54.854478] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.627 [2024-07-26 12:25:54.854620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.627 [2024-07-26 12:25:54.854646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.627 [2024-07-26 12:25:54.854660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.627 [2024-07-26 12:25:54.854673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.627 [2024-07-26 12:25:54.854701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.627 qpair failed and we were unable to recover it. 
00:25:01.627 [2024-07-26 12:25:54.864456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.627 [2024-07-26 12:25:54.864587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.627 [2024-07-26 12:25:54.864617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.627 [2024-07-26 12:25:54.864633] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.627 [2024-07-26 12:25:54.864646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.627 [2024-07-26 12:25:54.864675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.627 qpair failed and we were unable to recover it. 
00:25:01.627 [2024-07-26 12:25:54.874429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.627 [2024-07-26 12:25:54.874580] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.627 [2024-07-26 12:25:54.874606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.627 [2024-07-26 12:25:54.874621] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.627 [2024-07-26 12:25:54.874634] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.627 [2024-07-26 12:25:54.874661] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.627 qpair failed and we were unable to recover it. 
00:25:01.887 [2024-07-26 12:25:54.884520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.887 [2024-07-26 12:25:54.884647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.887 [2024-07-26 12:25:54.884672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.887 [2024-07-26 12:25:54.884687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.887 [2024-07-26 12:25:54.884700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.887 [2024-07-26 12:25:54.884727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.887 qpair failed and we were unable to recover it. 
00:25:01.887 [2024-07-26 12:25:54.894496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.887 [2024-07-26 12:25:54.894620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.887 [2024-07-26 12:25:54.894645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.887 [2024-07-26 12:25:54.894660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.887 [2024-07-26 12:25:54.894673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.887 [2024-07-26 12:25:54.894700] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.887 qpair failed and we were unable to recover it. 
00:25:01.887 [2024-07-26 12:25:54.904533] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.887 [2024-07-26 12:25:54.904659] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.887 [2024-07-26 12:25:54.904684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.887 [2024-07-26 12:25:54.904699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.887 [2024-07-26 12:25:54.904711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.887 [2024-07-26 12:25:54.904743] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.887 qpair failed and we were unable to recover it. 
00:25:01.887 [2024-07-26 12:25:54.914585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:01.887 [2024-07-26 12:25:54.914742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:01.887 [2024-07-26 12:25:54.914768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:01.887 [2024-07-26 12:25:54.914783] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:01.887 [2024-07-26 12:25:54.914809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:01.887 [2024-07-26 12:25:54.914839] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:01.887 qpair failed and we were unable to recover it. 
00:25:01.887 [2024-07-26 12:25:54.924668] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.887 [2024-07-26 12:25:54.924792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.887 [2024-07-26 12:25:54.924820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.887 [2024-07-26 12:25:54.924836] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.887 [2024-07-26 12:25:54.924853] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.887 [2024-07-26 12:25:54.924897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.887 qpair failed and we were unable to recover it.
00:25:01.887 [2024-07-26 12:25:54.934710] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.887 [2024-07-26 12:25:54.934860] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.887 [2024-07-26 12:25:54.934888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.887 [2024-07-26 12:25:54.934917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.887 [2024-07-26 12:25:54.934931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.887 [2024-07-26 12:25:54.934959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.887 qpair failed and we were unable to recover it.
00:25:01.887 [2024-07-26 12:25:54.944737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.887 [2024-07-26 12:25:54.944866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.887 [2024-07-26 12:25:54.944891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.887 [2024-07-26 12:25:54.944907] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.887 [2024-07-26 12:25:54.944921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.887 [2024-07-26 12:25:54.944950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.887 qpair failed and we were unable to recover it.
00:25:01.887 [2024-07-26 12:25:54.954660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.887 [2024-07-26 12:25:54.954790] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.887 [2024-07-26 12:25:54.954819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.887 [2024-07-26 12:25:54.954835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.887 [2024-07-26 12:25:54.954847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.887 [2024-07-26 12:25:54.954875] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.887 qpair failed and we were unable to recover it.
00:25:01.887 [2024-07-26 12:25:54.964743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.887 [2024-07-26 12:25:54.964871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.887 [2024-07-26 12:25:54.964898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.887 [2024-07-26 12:25:54.964914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.888 [2024-07-26 12:25:54.964927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.888 [2024-07-26 12:25:54.964955] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.888 qpair failed and we were unable to recover it.
00:25:01.888 [2024-07-26 12:25:54.974750] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.888 [2024-07-26 12:25:54.974881] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.888 [2024-07-26 12:25:54.974907] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.888 [2024-07-26 12:25:54.974922] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.888 [2024-07-26 12:25:54.974936] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.888 [2024-07-26 12:25:54.974978] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.888 qpair failed and we were unable to recover it.
00:25:01.888 [2024-07-26 12:25:54.984868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.888 [2024-07-26 12:25:54.985013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.888 [2024-07-26 12:25:54.985038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.888 [2024-07-26 12:25:54.985053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.888 [2024-07-26 12:25:54.985073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.888 [2024-07-26 12:25:54.985103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.888 qpair failed and we were unable to recover it.
00:25:01.888 [2024-07-26 12:25:54.994791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.888 [2024-07-26 12:25:54.994918] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.888 [2024-07-26 12:25:54.994944] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.888 [2024-07-26 12:25:54.994959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.888 [2024-07-26 12:25:54.994977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.888 [2024-07-26 12:25:54.995006] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.888 qpair failed and we were unable to recover it.
00:25:01.888 [2024-07-26 12:25:55.004807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.888 [2024-07-26 12:25:55.004942] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.888 [2024-07-26 12:25:55.004968] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.888 [2024-07-26 12:25:55.004983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.888 [2024-07-26 12:25:55.004998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.888 [2024-07-26 12:25:55.005026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.888 qpair failed and we were unable to recover it.
00:25:01.888 [2024-07-26 12:25:55.014986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.888 [2024-07-26 12:25:55.015114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.888 [2024-07-26 12:25:55.015140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.888 [2024-07-26 12:25:55.015155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.888 [2024-07-26 12:25:55.015168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.888 [2024-07-26 12:25:55.015198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.888 qpair failed and we were unable to recover it.
00:25:01.888 [2024-07-26 12:25:55.024970] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.888 [2024-07-26 12:25:55.025103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.888 [2024-07-26 12:25:55.025129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.888 [2024-07-26 12:25:55.025144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.888 [2024-07-26 12:25:55.025157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.888 [2024-07-26 12:25:55.025187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.888 qpair failed and we were unable to recover it.
00:25:01.888 [2024-07-26 12:25:55.034908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.888 [2024-07-26 12:25:55.035034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.888 [2024-07-26 12:25:55.035066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.888 [2024-07-26 12:25:55.035085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.888 [2024-07-26 12:25:55.035098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.888 [2024-07-26 12:25:55.035127] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.888 qpair failed and we were unable to recover it.
00:25:01.888 [2024-07-26 12:25:55.044935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.888 [2024-07-26 12:25:55.045101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.888 [2024-07-26 12:25:55.045126] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.888 [2024-07-26 12:25:55.045142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.888 [2024-07-26 12:25:55.045155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.888 [2024-07-26 12:25:55.045184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.888 qpair failed and we were unable to recover it.
00:25:01.888 [2024-07-26 12:25:55.054980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.888 [2024-07-26 12:25:55.055114] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.888 [2024-07-26 12:25:55.055140] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.888 [2024-07-26 12:25:55.055155] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.888 [2024-07-26 12:25:55.055168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.888 [2024-07-26 12:25:55.055197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.888 qpair failed and we were unable to recover it.
00:25:01.888 [2024-07-26 12:25:55.065012] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.889 [2024-07-26 12:25:55.065181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.889 [2024-07-26 12:25:55.065209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.889 [2024-07-26 12:25:55.065228] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.889 [2024-07-26 12:25:55.065242] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.889 [2024-07-26 12:25:55.065271] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.889 qpair failed and we were unable to recover it.
00:25:01.889 [2024-07-26 12:25:55.075054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.889 [2024-07-26 12:25:55.075188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.889 [2024-07-26 12:25:55.075214] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.889 [2024-07-26 12:25:55.075230] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.889 [2024-07-26 12:25:55.075244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.889 [2024-07-26 12:25:55.075274] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.889 qpair failed and we were unable to recover it.
00:25:01.889 [2024-07-26 12:25:55.085077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.889 [2024-07-26 12:25:55.085238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.889 [2024-07-26 12:25:55.085264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.889 [2024-07-26 12:25:55.085278] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.889 [2024-07-26 12:25:55.085297] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.889 [2024-07-26 12:25:55.085326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.889 qpair failed and we were unable to recover it.
00:25:01.889 [2024-07-26 12:25:55.095084] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.889 [2024-07-26 12:25:55.095215] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.889 [2024-07-26 12:25:55.095240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.889 [2024-07-26 12:25:55.095256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.889 [2024-07-26 12:25:55.095270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.889 [2024-07-26 12:25:55.095298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.889 qpair failed and we were unable to recover it.
00:25:01.889 [2024-07-26 12:25:55.105119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.889 [2024-07-26 12:25:55.105258] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.889 [2024-07-26 12:25:55.105284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.889 [2024-07-26 12:25:55.105299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.889 [2024-07-26 12:25:55.105313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.889 [2024-07-26 12:25:55.105341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.889 qpair failed and we were unable to recover it.
00:25:01.889 [2024-07-26 12:25:55.115190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.889 [2024-07-26 12:25:55.115347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.889 [2024-07-26 12:25:55.115373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.889 [2024-07-26 12:25:55.115388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.889 [2024-07-26 12:25:55.115402] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.889 [2024-07-26 12:25:55.115446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.889 qpair failed and we were unable to recover it.
00:25:01.889 [2024-07-26 12:25:55.125189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.889 [2024-07-26 12:25:55.125322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.889 [2024-07-26 12:25:55.125348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.889 [2024-07-26 12:25:55.125363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.889 [2024-07-26 12:25:55.125376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.889 [2024-07-26 12:25:55.125406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.889 qpair failed and we were unable to recover it.
00:25:01.889 [2024-07-26 12:25:55.135206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:01.889 [2024-07-26 12:25:55.135334] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:01.889 [2024-07-26 12:25:55.135360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:01.889 [2024-07-26 12:25:55.135375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:01.889 [2024-07-26 12:25:55.135389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:01.889 [2024-07-26 12:25:55.135417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:01.889 qpair failed and we were unable to recover it.
00:25:02.149 [2024-07-26 12:25:55.145381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.149 [2024-07-26 12:25:55.145524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.149 [2024-07-26 12:25:55.145550] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.149 [2024-07-26 12:25:55.145565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.149 [2024-07-26 12:25:55.145579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:02.149 [2024-07-26 12:25:55.145607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.149 qpair failed and we were unable to recover it.
00:25:02.149 [2024-07-26 12:25:55.155325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.149 [2024-07-26 12:25:55.155461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.149 [2024-07-26 12:25:55.155487] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.149 [2024-07-26 12:25:55.155503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.149 [2024-07-26 12:25:55.155516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:02.149 [2024-07-26 12:25:55.155545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.149 qpair failed and we were unable to recover it.
00:25:02.149 [2024-07-26 12:25:55.165359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.149 [2024-07-26 12:25:55.165483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.149 [2024-07-26 12:25:55.165510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.149 [2024-07-26 12:25:55.165525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.149 [2024-07-26 12:25:55.165538] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:02.149 [2024-07-26 12:25:55.165567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.149 qpair failed and we were unable to recover it.
00:25:02.149 [2024-07-26 12:25:55.175383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.149 [2024-07-26 12:25:55.175508] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.149 [2024-07-26 12:25:55.175535] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.149 [2024-07-26 12:25:55.175555] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.149 [2024-07-26 12:25:55.175571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:02.149 [2024-07-26 12:25:55.175599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.149 qpair failed and we were unable to recover it.
00:25:02.149 [2024-07-26 12:25:55.185398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.149 [2024-07-26 12:25:55.185532] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.149 [2024-07-26 12:25:55.185559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.149 [2024-07-26 12:25:55.185574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.149 [2024-07-26 12:25:55.185589] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:02.149 [2024-07-26 12:25:55.185618] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.149 qpair failed and we were unable to recover it.
00:25:02.149 [2024-07-26 12:25:55.195419] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.149 [2024-07-26 12:25:55.195563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.149 [2024-07-26 12:25:55.195589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.149 [2024-07-26 12:25:55.195605] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.149 [2024-07-26 12:25:55.195617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:02.149 [2024-07-26 12:25:55.195647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.149 qpair failed and we were unable to recover it.
00:25:02.149 [2024-07-26 12:25:55.205424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.149 [2024-07-26 12:25:55.205563] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.149 [2024-07-26 12:25:55.205589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.149 [2024-07-26 12:25:55.205604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.149 [2024-07-26 12:25:55.205618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:02.149 [2024-07-26 12:25:55.205646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.149 qpair failed and we were unable to recover it.
00:25:02.149 [2024-07-26 12:25:55.215441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.149 [2024-07-26 12:25:55.215566] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.149 [2024-07-26 12:25:55.215593] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.149 [2024-07-26 12:25:55.215608] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.149 [2024-07-26 12:25:55.215621] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:02.149 [2024-07-26 12:25:55.215650] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.149 qpair failed and we were unable to recover it.
00:25:02.149 [2024-07-26 12:25:55.225535] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.149 [2024-07-26 12:25:55.225713] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.149 [2024-07-26 12:25:55.225739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.149 [2024-07-26 12:25:55.225754] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.149 [2024-07-26 12:25:55.225783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:02.149 [2024-07-26 12:25:55.225811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.149 qpair failed and we were unable to recover it.
00:25:02.149 [2024-07-26 12:25:55.235509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:02.149 [2024-07-26 12:25:55.235634] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:02.149 [2024-07-26 12:25:55.235660] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:02.149 [2024-07-26 12:25:55.235676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:02.149 [2024-07-26 12:25:55.235690] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:02.149 [2024-07-26 12:25:55.235719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:02.149 qpair failed and we were unable to recover it.
00:25:02.149 [2024-07-26 12:25:55.245540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.149 [2024-07-26 12:25:55.245684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.149 [2024-07-26 12:25:55.245710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.149 [2024-07-26 12:25:55.245725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.150 [2024-07-26 12:25:55.245738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.150 [2024-07-26 12:25:55.245768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.150 qpair failed and we were unable to recover it. 
00:25:02.150 [2024-07-26 12:25:55.255530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.150 [2024-07-26 12:25:55.255666] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.150 [2024-07-26 12:25:55.255692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.150 [2024-07-26 12:25:55.255707] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.150 [2024-07-26 12:25:55.255721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.150 [2024-07-26 12:25:55.255750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.150 qpair failed and we were unable to recover it. 
00:25:02.150 [2024-07-26 12:25:55.265688] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.150 [2024-07-26 12:25:55.265863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.150 [2024-07-26 12:25:55.265889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.150 [2024-07-26 12:25:55.265914] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.150 [2024-07-26 12:25:55.265928] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.150 [2024-07-26 12:25:55.265958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.150 qpair failed and we were unable to recover it. 
00:25:02.150 [2024-07-26 12:25:55.275702] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.150 [2024-07-26 12:25:55.275853] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.150 [2024-07-26 12:25:55.275879] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.150 [2024-07-26 12:25:55.275895] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.150 [2024-07-26 12:25:55.275909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.150 [2024-07-26 12:25:55.275938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.150 qpair failed and we were unable to recover it. 
00:25:02.150 [2024-07-26 12:25:55.285716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.150 [2024-07-26 12:25:55.285842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.150 [2024-07-26 12:25:55.285869] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.150 [2024-07-26 12:25:55.285884] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.150 [2024-07-26 12:25:55.285898] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.150 [2024-07-26 12:25:55.285926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.150 qpair failed and we were unable to recover it. 
00:25:02.150 [2024-07-26 12:25:55.295701] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.150 [2024-07-26 12:25:55.295826] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.150 [2024-07-26 12:25:55.295853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.150 [2024-07-26 12:25:55.295868] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.150 [2024-07-26 12:25:55.295882] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.150 [2024-07-26 12:25:55.295910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.150 qpair failed and we were unable to recover it. 
00:25:02.150 [2024-07-26 12:25:55.305781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.150 [2024-07-26 12:25:55.305965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.150 [2024-07-26 12:25:55.305990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.150 [2024-07-26 12:25:55.306005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.150 [2024-07-26 12:25:55.306017] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.150 [2024-07-26 12:25:55.306069] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.150 qpair failed and we were unable to recover it. 
00:25:02.150 [2024-07-26 12:25:55.315740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.150 [2024-07-26 12:25:55.315868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.150 [2024-07-26 12:25:55.315893] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.150 [2024-07-26 12:25:55.315908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.150 [2024-07-26 12:25:55.315921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.150 [2024-07-26 12:25:55.315950] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.150 qpair failed and we were unable to recover it. 
00:25:02.150 [2024-07-26 12:25:55.325737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.150 [2024-07-26 12:25:55.325888] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.150 [2024-07-26 12:25:55.325914] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.150 [2024-07-26 12:25:55.325929] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.150 [2024-07-26 12:25:55.325942] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.150 [2024-07-26 12:25:55.325987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.150 qpair failed and we were unable to recover it. 
00:25:02.150 [2024-07-26 12:25:55.335817] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.150 [2024-07-26 12:25:55.335984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.150 [2024-07-26 12:25:55.336010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.150 [2024-07-26 12:25:55.336025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.150 [2024-07-26 12:25:55.336038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.150 [2024-07-26 12:25:55.336074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.150 qpair failed and we were unable to recover it. 
00:25:02.150 [2024-07-26 12:25:55.345828] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.150 [2024-07-26 12:25:55.345955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.150 [2024-07-26 12:25:55.345980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.150 [2024-07-26 12:25:55.345995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.150 [2024-07-26 12:25:55.346008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.150 [2024-07-26 12:25:55.346037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.150 qpair failed and we were unable to recover it. 
00:25:02.150 [2024-07-26 12:25:55.355834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.150 [2024-07-26 12:25:55.355962] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.150 [2024-07-26 12:25:55.355987] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.150 [2024-07-26 12:25:55.356008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.150 [2024-07-26 12:25:55.356023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.150 [2024-07-26 12:25:55.356051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.150 qpair failed and we were unable to recover it. 
00:25:02.150 [2024-07-26 12:25:55.365846] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.150 [2024-07-26 12:25:55.365970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.150 [2024-07-26 12:25:55.365996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.150 [2024-07-26 12:25:55.366012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.150 [2024-07-26 12:25:55.366025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.151 [2024-07-26 12:25:55.366054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.151 qpair failed and we were unable to recover it. 
00:25:02.151 [2024-07-26 12:25:55.375882] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.151 [2024-07-26 12:25:55.376047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.151 [2024-07-26 12:25:55.376079] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.151 [2024-07-26 12:25:55.376095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.151 [2024-07-26 12:25:55.376108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.151 [2024-07-26 12:25:55.376138] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.151 qpair failed and we were unable to recover it. 
00:25:02.151 [2024-07-26 12:25:55.385900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.151 [2024-07-26 12:25:55.386027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.151 [2024-07-26 12:25:55.386053] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.151 [2024-07-26 12:25:55.386076] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.151 [2024-07-26 12:25:55.386091] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.151 [2024-07-26 12:25:55.386121] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.151 qpair failed and we were unable to recover it. 
00:25:02.151 [2024-07-26 12:25:55.395952] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.151 [2024-07-26 12:25:55.396083] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.151 [2024-07-26 12:25:55.396109] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.151 [2024-07-26 12:25:55.396125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.151 [2024-07-26 12:25:55.396138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.151 [2024-07-26 12:25:55.396166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.151 qpair failed and we were unable to recover it. 
00:25:02.410 [2024-07-26 12:25:55.405953] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.410 [2024-07-26 12:25:55.406091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.410 [2024-07-26 12:25:55.406118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.410 [2024-07-26 12:25:55.406134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.410 [2024-07-26 12:25:55.406148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.410 [2024-07-26 12:25:55.406176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.410 qpair failed and we were unable to recover it. 
00:25:02.410 [2024-07-26 12:25:55.416099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.410 [2024-07-26 12:25:55.416226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.410 [2024-07-26 12:25:55.416252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.410 [2024-07-26 12:25:55.416268] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.410 [2024-07-26 12:25:55.416282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.410 [2024-07-26 12:25:55.416311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.410 qpair failed and we were unable to recover it. 
00:25:02.410 [2024-07-26 12:25:55.426119] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.410 [2024-07-26 12:25:55.426247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.410 [2024-07-26 12:25:55.426273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.410 [2024-07-26 12:25:55.426288] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.410 [2024-07-26 12:25:55.426301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.411 [2024-07-26 12:25:55.426331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.411 qpair failed and we were unable to recover it. 
00:25:02.411 [2024-07-26 12:25:55.436079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.411 [2024-07-26 12:25:55.436238] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.411 [2024-07-26 12:25:55.436264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.411 [2024-07-26 12:25:55.436280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.411 [2024-07-26 12:25:55.436294] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.411 [2024-07-26 12:25:55.436323] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.411 qpair failed and we were unable to recover it. 
00:25:02.411 [2024-07-26 12:25:55.446112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.411 [2024-07-26 12:25:55.446253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.411 [2024-07-26 12:25:55.446283] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.411 [2024-07-26 12:25:55.446299] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.411 [2024-07-26 12:25:55.446313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.411 [2024-07-26 12:25:55.446341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.411 qpair failed and we were unable to recover it. 
00:25:02.411 [2024-07-26 12:25:55.456189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.411 [2024-07-26 12:25:55.456319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.411 [2024-07-26 12:25:55.456345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.411 [2024-07-26 12:25:55.456361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.411 [2024-07-26 12:25:55.456375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.411 [2024-07-26 12:25:55.456404] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.411 qpair failed and we were unable to recover it. 
00:25:02.411 [2024-07-26 12:25:55.466144] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.411 [2024-07-26 12:25:55.466269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.411 [2024-07-26 12:25:55.466295] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.411 [2024-07-26 12:25:55.466310] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.411 [2024-07-26 12:25:55.466323] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.411 [2024-07-26 12:25:55.466352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.411 qpair failed and we were unable to recover it. 
00:25:02.411 [2024-07-26 12:25:55.476167] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.411 [2024-07-26 12:25:55.476290] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.411 [2024-07-26 12:25:55.476316] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.411 [2024-07-26 12:25:55.476331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.411 [2024-07-26 12:25:55.476345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.411 [2024-07-26 12:25:55.476373] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.411 qpair failed and we were unable to recover it. 
00:25:02.411 [2024-07-26 12:25:55.486198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.411 [2024-07-26 12:25:55.486330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.411 [2024-07-26 12:25:55.486355] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.411 [2024-07-26 12:25:55.486370] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.411 [2024-07-26 12:25:55.486385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.411 [2024-07-26 12:25:55.486413] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.411 qpair failed and we were unable to recover it. 
00:25:02.411 [2024-07-26 12:25:55.496257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.411 [2024-07-26 12:25:55.496385] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.411 [2024-07-26 12:25:55.496411] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.411 [2024-07-26 12:25:55.496427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.411 [2024-07-26 12:25:55.496441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.411 [2024-07-26 12:25:55.496469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.411 qpair failed and we were unable to recover it. 
00:25:02.411 [2024-07-26 12:25:55.506249] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.411 [2024-07-26 12:25:55.506372] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.411 [2024-07-26 12:25:55.506397] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.411 [2024-07-26 12:25:55.506412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.411 [2024-07-26 12:25:55.506425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.411 [2024-07-26 12:25:55.506454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.411 qpair failed and we were unable to recover it. 
00:25:02.411 [2024-07-26 12:25:55.516279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.411 [2024-07-26 12:25:55.516401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.411 [2024-07-26 12:25:55.516426] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.411 [2024-07-26 12:25:55.516442] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.411 [2024-07-26 12:25:55.516456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.411 [2024-07-26 12:25:55.516484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.411 qpair failed and we were unable to recover it. 
00:25:02.411 [2024-07-26 12:25:55.526328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.411 [2024-07-26 12:25:55.526459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.411 [2024-07-26 12:25:55.526486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.411 [2024-07-26 12:25:55.526501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.411 [2024-07-26 12:25:55.526514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.411 [2024-07-26 12:25:55.526544] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.411 qpair failed and we were unable to recover it. 
00:25:02.411 [2024-07-26 12:25:55.536370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.411 [2024-07-26 12:25:55.536491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.411 [2024-07-26 12:25:55.536521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.411 [2024-07-26 12:25:55.536537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.411 [2024-07-26 12:25:55.536551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.411 [2024-07-26 12:25:55.536580] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.411 qpair failed and we were unable to recover it. 
00:25:02.411 [2024-07-26 12:25:55.546431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.411 [2024-07-26 12:25:55.546599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.411 [2024-07-26 12:25:55.546625] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.411 [2024-07-26 12:25:55.546655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.411 [2024-07-26 12:25:55.546669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.411 [2024-07-26 12:25:55.546697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.411 qpair failed and we were unable to recover it. 
00:25:02.411–00:25:02.677 [the identical failure sequence — ctrlr.c:761 "Unknown controller ID 0x1", nvme_fabric.c CONNECT failed rc -5 (sct 1, sc 130), nvme_tcp.c "Failed to connect tqpair=0x21bf250", nvme_qpair.c "CQ transport error -6 (No such device or address) on qpair id 3", "qpair failed and we were unable to recover it." — repeats 35 more times at ~10 ms intervals, from 12:25:55.556 through 12:25:55.897]
00:25:02.677 [2024-07-26 12:25:55.907379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.677 [2024-07-26 12:25:55.907520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.677 [2024-07-26 12:25:55.907547] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.677 [2024-07-26 12:25:55.907562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.677 [2024-07-26 12:25:55.907575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.677 [2024-07-26 12:25:55.907602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.677 qpair failed and we were unable to recover it. 
00:25:02.677 [2024-07-26 12:25:55.917413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.677 [2024-07-26 12:25:55.917541] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.677 [2024-07-26 12:25:55.917568] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.677 [2024-07-26 12:25:55.917583] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.677 [2024-07-26 12:25:55.917596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.677 [2024-07-26 12:25:55.917624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.677 qpair failed and we were unable to recover it. 
00:25:02.939 [2024-07-26 12:25:55.927485] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.939 [2024-07-26 12:25:55.927614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.939 [2024-07-26 12:25:55.927640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.939 [2024-07-26 12:25:55.927656] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.939 [2024-07-26 12:25:55.927669] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.939 [2024-07-26 12:25:55.927697] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.939 qpair failed and we were unable to recover it. 
00:25:02.939 [2024-07-26 12:25:55.937458] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.939 [2024-07-26 12:25:55.937584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.939 [2024-07-26 12:25:55.937611] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.939 [2024-07-26 12:25:55.937627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.939 [2024-07-26 12:25:55.937646] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.939 [2024-07-26 12:25:55.937675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.939 qpair failed and we were unable to recover it. 
00:25:02.939 [2024-07-26 12:25:55.947562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.939 [2024-07-26 12:25:55.947726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.939 [2024-07-26 12:25:55.947753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.939 [2024-07-26 12:25:55.947769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.939 [2024-07-26 12:25:55.947797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.939 [2024-07-26 12:25:55.947827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.939 qpair failed and we were unable to recover it. 
00:25:02.939 [2024-07-26 12:25:55.957607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.939 [2024-07-26 12:25:55.957768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.939 [2024-07-26 12:25:55.957793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.939 [2024-07-26 12:25:55.957808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.939 [2024-07-26 12:25:55.957821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.939 [2024-07-26 12:25:55.957864] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.939 qpair failed and we were unable to recover it. 
00:25:02.939 [2024-07-26 12:25:55.967652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.939 [2024-07-26 12:25:55.967819] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.939 [2024-07-26 12:25:55.967847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.939 [2024-07-26 12:25:55.967863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.939 [2024-07-26 12:25:55.967877] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.939 [2024-07-26 12:25:55.967904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.939 qpair failed and we were unable to recover it. 
00:25:02.939 [2024-07-26 12:25:55.977612] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.939 [2024-07-26 12:25:55.977777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.939 [2024-07-26 12:25:55.977804] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.939 [2024-07-26 12:25:55.977821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.939 [2024-07-26 12:25:55.977849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.939 [2024-07-26 12:25:55.977878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.939 qpair failed and we were unable to recover it. 
00:25:02.939 [2024-07-26 12:25:55.987652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.939 [2024-07-26 12:25:55.987784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.939 [2024-07-26 12:25:55.987811] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.939 [2024-07-26 12:25:55.987827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.939 [2024-07-26 12:25:55.987840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.939 [2024-07-26 12:25:55.987869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.939 qpair failed and we were unable to recover it. 
00:25:02.939 [2024-07-26 12:25:55.997732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.939 [2024-07-26 12:25:55.997863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.939 [2024-07-26 12:25:55.997889] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.939 [2024-07-26 12:25:55.997905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.939 [2024-07-26 12:25:55.997919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.939 [2024-07-26 12:25:55.997948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.939 qpair failed and we were unable to recover it. 
00:25:02.939 [2024-07-26 12:25:56.007762] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.939 [2024-07-26 12:25:56.007889] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.939 [2024-07-26 12:25:56.007916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.939 [2024-07-26 12:25:56.007933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.939 [2024-07-26 12:25:56.007946] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.939 [2024-07-26 12:25:56.007976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.939 qpair failed and we were unable to recover it. 
00:25:02.939 [2024-07-26 12:25:56.017706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.939 [2024-07-26 12:25:56.017850] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.939 [2024-07-26 12:25:56.017877] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.939 [2024-07-26 12:25:56.017892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.939 [2024-07-26 12:25:56.017905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.939 [2024-07-26 12:25:56.017949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.939 qpair failed and we were unable to recover it. 
00:25:02.939 [2024-07-26 12:25:56.027737] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.939 [2024-07-26 12:25:56.027882] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.939 [2024-07-26 12:25:56.027908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.939 [2024-07-26 12:25:56.027933] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.939 [2024-07-26 12:25:56.027947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.939 [2024-07-26 12:25:56.027976] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.939 qpair failed and we were unable to recover it. 
00:25:02.939 [2024-07-26 12:25:56.037842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.939 [2024-07-26 12:25:56.038007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.939 [2024-07-26 12:25:56.038033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.939 [2024-07-26 12:25:56.038071] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.939 [2024-07-26 12:25:56.038086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.939 [2024-07-26 12:25:56.038116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.939 qpair failed and we were unable to recover it. 
00:25:02.940 [2024-07-26 12:25:56.047798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.940 [2024-07-26 12:25:56.047925] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.940 [2024-07-26 12:25:56.047952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.940 [2024-07-26 12:25:56.047968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.940 [2024-07-26 12:25:56.047982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.940 [2024-07-26 12:25:56.048010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.940 qpair failed and we were unable to recover it. 
00:25:02.940 [2024-07-26 12:25:56.057798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.940 [2024-07-26 12:25:56.057926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.940 [2024-07-26 12:25:56.057953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.940 [2024-07-26 12:25:56.057968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.940 [2024-07-26 12:25:56.057981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.940 [2024-07-26 12:25:56.058010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.940 qpair failed and we were unable to recover it. 
00:25:02.940 [2024-07-26 12:25:56.067886] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.940 [2024-07-26 12:25:56.068031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.940 [2024-07-26 12:25:56.068057] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.940 [2024-07-26 12:25:56.068080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.940 [2024-07-26 12:25:56.068094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.940 [2024-07-26 12:25:56.068122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.940 qpair failed and we were unable to recover it. 
00:25:02.940 [2024-07-26 12:25:56.077862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.940 [2024-07-26 12:25:56.078000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.940 [2024-07-26 12:25:56.078026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.940 [2024-07-26 12:25:56.078042] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.940 [2024-07-26 12:25:56.078055] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.940 [2024-07-26 12:25:56.078092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.940 qpair failed and we were unable to recover it. 
00:25:02.940 [2024-07-26 12:25:56.087982] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.940 [2024-07-26 12:25:56.088115] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.940 [2024-07-26 12:25:56.088141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.940 [2024-07-26 12:25:56.088156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.940 [2024-07-26 12:25:56.088169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.940 [2024-07-26 12:25:56.088198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.940 qpair failed and we were unable to recover it. 
00:25:02.940 [2024-07-26 12:25:56.097958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.940 [2024-07-26 12:25:56.098130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.940 [2024-07-26 12:25:56.098158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.940 [2024-07-26 12:25:56.098174] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.940 [2024-07-26 12:25:56.098188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.940 [2024-07-26 12:25:56.098218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.940 qpair failed and we were unable to recover it. 
00:25:02.940 [2024-07-26 12:25:56.107966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.940 [2024-07-26 12:25:56.108101] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.940 [2024-07-26 12:25:56.108125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.940 [2024-07-26 12:25:56.108140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.940 [2024-07-26 12:25:56.108154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.940 [2024-07-26 12:25:56.108183] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.940 qpair failed and we were unable to recover it. 
00:25:02.940 [2024-07-26 12:25:56.117972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.940 [2024-07-26 12:25:56.118097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.940 [2024-07-26 12:25:56.118121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.940 [2024-07-26 12:25:56.118142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.940 [2024-07-26 12:25:56.118155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.940 [2024-07-26 12:25:56.118184] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.940 qpair failed and we were unable to recover it. 
00:25:02.940 [2024-07-26 12:25:56.128048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.940 [2024-07-26 12:25:56.128226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.940 [2024-07-26 12:25:56.128253] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.940 [2024-07-26 12:25:56.128269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.940 [2024-07-26 12:25:56.128282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.940 [2024-07-26 12:25:56.128312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.940 qpair failed and we were unable to recover it. 
00:25:02.940 [2024-07-26 12:25:56.138044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.940 [2024-07-26 12:25:56.138218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.940 [2024-07-26 12:25:56.138246] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.940 [2024-07-26 12:25:56.138261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.940 [2024-07-26 12:25:56.138275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.940 [2024-07-26 12:25:56.138303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.940 qpair failed and we were unable to recover it. 
00:25:02.940 [2024-07-26 12:25:56.148092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.940 [2024-07-26 12:25:56.148227] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.940 [2024-07-26 12:25:56.148254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.940 [2024-07-26 12:25:56.148270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.940 [2024-07-26 12:25:56.148283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.940 [2024-07-26 12:25:56.148312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.940 qpair failed and we were unable to recover it. 
00:25:02.940 [2024-07-26 12:25:56.158120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.940 [2024-07-26 12:25:56.158292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.940 [2024-07-26 12:25:56.158319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.940 [2024-07-26 12:25:56.158335] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.940 [2024-07-26 12:25:56.158348] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.940 [2024-07-26 12:25:56.158391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.940 qpair failed and we were unable to recover it. 
00:25:02.940 [2024-07-26 12:25:56.168150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.941 [2024-07-26 12:25:56.168312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.941 [2024-07-26 12:25:56.168338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.941 [2024-07-26 12:25:56.168354] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.941 [2024-07-26 12:25:56.168367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.941 [2024-07-26 12:25:56.168411] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.941 qpair failed and we were unable to recover it. 
00:25:02.941 [2024-07-26 12:25:56.178211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.941 [2024-07-26 12:25:56.178343] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.941 [2024-07-26 12:25:56.178369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.941 [2024-07-26 12:25:56.178385] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.941 [2024-07-26 12:25:56.178399] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.941 [2024-07-26 12:25:56.178427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.941 qpair failed and we were unable to recover it. 
00:25:02.941 [2024-07-26 12:25:56.188187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:02.941 [2024-07-26 12:25:56.188322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:02.941 [2024-07-26 12:25:56.188348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:02.941 [2024-07-26 12:25:56.188363] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:02.941 [2024-07-26 12:25:56.188376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:02.941 [2024-07-26 12:25:56.188405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:02.941 qpair failed and we were unable to recover it. 
00:25:03.202 [2024-07-26 12:25:56.198216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.202 [2024-07-26 12:25:56.198344] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.202 [2024-07-26 12:25:56.198371] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.202 [2024-07-26 12:25:56.198387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.202 [2024-07-26 12:25:56.198401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.202 [2024-07-26 12:25:56.198430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.202 qpair failed and we were unable to recover it. 
00:25:03.202 [2024-07-26 12:25:56.208250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.202 [2024-07-26 12:25:56.208378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.202 [2024-07-26 12:25:56.208405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.202 [2024-07-26 12:25:56.208427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.202 [2024-07-26 12:25:56.208441] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.202 [2024-07-26 12:25:56.208470] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.202 qpair failed and we were unable to recover it. 
00:25:03.202 [2024-07-26 12:25:56.218255] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.202 [2024-07-26 12:25:56.218377] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.202 [2024-07-26 12:25:56.218405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.203 [2024-07-26 12:25:56.218421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.203 [2024-07-26 12:25:56.218435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.203 [2024-07-26 12:25:56.218464] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.203 qpair failed and we were unable to recover it. 
00:25:03.203 [2024-07-26 12:25:56.228334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.203 [2024-07-26 12:25:56.228466] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.203 [2024-07-26 12:25:56.228492] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.203 [2024-07-26 12:25:56.228508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.203 [2024-07-26 12:25:56.228522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.203 [2024-07-26 12:25:56.228550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.203 qpair failed and we were unable to recover it. 
00:25:03.203 [2024-07-26 12:25:56.238335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.203 [2024-07-26 12:25:56.238472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.203 [2024-07-26 12:25:56.238499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.203 [2024-07-26 12:25:56.238515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.203 [2024-07-26 12:25:56.238529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.203 [2024-07-26 12:25:56.238557] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.203 qpair failed and we were unable to recover it. 
00:25:03.203 [2024-07-26 12:25:56.248407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.203 [2024-07-26 12:25:56.248540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.203 [2024-07-26 12:25:56.248567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.203 [2024-07-26 12:25:56.248582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.203 [2024-07-26 12:25:56.248596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.203 [2024-07-26 12:25:56.248625] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.203 qpair failed and we were unable to recover it. 
00:25:03.203 [2024-07-26 12:25:56.258393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.203 [2024-07-26 12:25:56.258535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.203 [2024-07-26 12:25:56.258561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.203 [2024-07-26 12:25:56.258577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.203 [2024-07-26 12:25:56.258590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.203 [2024-07-26 12:25:56.258633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.203 qpair failed and we were unable to recover it. 
00:25:03.203 [2024-07-26 12:25:56.268440] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.203 [2024-07-26 12:25:56.268576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.203 [2024-07-26 12:25:56.268603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.203 [2024-07-26 12:25:56.268618] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.203 [2024-07-26 12:25:56.268631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.203 [2024-07-26 12:25:56.268660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.203 qpair failed and we were unable to recover it. 
00:25:03.203 [2024-07-26 12:25:56.278429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.203 [2024-07-26 12:25:56.278564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.203 [2024-07-26 12:25:56.278590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.203 [2024-07-26 12:25:56.278604] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.203 [2024-07-26 12:25:56.278618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.203 [2024-07-26 12:25:56.278646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.203 qpair failed and we were unable to recover it. 
00:25:03.203 [2024-07-26 12:25:56.288457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.203 [2024-07-26 12:25:56.288576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.203 [2024-07-26 12:25:56.288602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.203 [2024-07-26 12:25:56.288617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.203 [2024-07-26 12:25:56.288631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.203 [2024-07-26 12:25:56.288659] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.203 qpair failed and we were unable to recover it. 
00:25:03.203 [2024-07-26 12:25:56.298494] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.203 [2024-07-26 12:25:56.298614] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.203 [2024-07-26 12:25:56.298645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.203 [2024-07-26 12:25:56.298662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.203 [2024-07-26 12:25:56.298675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.203 [2024-07-26 12:25:56.298706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.203 qpair failed and we were unable to recover it. 
00:25:03.203 [2024-07-26 12:25:56.308519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.203 [2024-07-26 12:25:56.308651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.203 [2024-07-26 12:25:56.308676] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.203 [2024-07-26 12:25:56.308691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.203 [2024-07-26 12:25:56.308704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.203 [2024-07-26 12:25:56.308733] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.203 qpair failed and we were unable to recover it. 
00:25:03.203 [2024-07-26 12:25:56.318563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.203 [2024-07-26 12:25:56.318696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.203 [2024-07-26 12:25:56.318721] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.203 [2024-07-26 12:25:56.318737] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.203 [2024-07-26 12:25:56.318750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.203 [2024-07-26 12:25:56.318778] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.203 qpair failed and we were unable to recover it. 
00:25:03.203 [2024-07-26 12:25:56.328602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.203 [2024-07-26 12:25:56.328770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.203 [2024-07-26 12:25:56.328796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.203 [2024-07-26 12:25:56.328827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.203 [2024-07-26 12:25:56.328840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.203 [2024-07-26 12:25:56.328868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.203 qpair failed and we were unable to recover it. 
00:25:03.203 [2024-07-26 12:25:56.338621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.203 [2024-07-26 12:25:56.338749] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.203 [2024-07-26 12:25:56.338775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.203 [2024-07-26 12:25:56.338791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.203 [2024-07-26 12:25:56.338804] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.204 [2024-07-26 12:25:56.338838] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.204 qpair failed and we were unable to recover it. 
00:25:03.204 [2024-07-26 12:25:56.348675] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.204 [2024-07-26 12:25:56.348808] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.204 [2024-07-26 12:25:56.348833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.204 [2024-07-26 12:25:56.348847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.204 [2024-07-26 12:25:56.348860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.204 [2024-07-26 12:25:56.348889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.204 qpair failed and we were unable to recover it. 
00:25:03.204 [2024-07-26 12:25:56.358671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.204 [2024-07-26 12:25:56.358805] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.204 [2024-07-26 12:25:56.358831] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.204 [2024-07-26 12:25:56.358847] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.204 [2024-07-26 12:25:56.358860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.204 [2024-07-26 12:25:56.358888] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.204 qpair failed and we were unable to recover it. 
00:25:03.204 [2024-07-26 12:25:56.368699] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.204 [2024-07-26 12:25:56.368875] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.204 [2024-07-26 12:25:56.368902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.204 [2024-07-26 12:25:56.368918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.204 [2024-07-26 12:25:56.368931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.204 [2024-07-26 12:25:56.368961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.204 qpair failed and we were unable to recover it. 
00:25:03.204 [2024-07-26 12:25:56.378722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.204 [2024-07-26 12:25:56.378849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.204 [2024-07-26 12:25:56.378876] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.204 [2024-07-26 12:25:56.378891] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.204 [2024-07-26 12:25:56.378905] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.204 [2024-07-26 12:25:56.378933] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.204 qpair failed and we were unable to recover it. 
00:25:03.204 [2024-07-26 12:25:56.388770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.204 [2024-07-26 12:25:56.388903] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.204 [2024-07-26 12:25:56.388934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.204 [2024-07-26 12:25:56.388950] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.204 [2024-07-26 12:25:56.388963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.204 [2024-07-26 12:25:56.388991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.204 qpair failed and we were unable to recover it. 
00:25:03.204 [2024-07-26 12:25:56.398837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.204 [2024-07-26 12:25:56.398971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.204 [2024-07-26 12:25:56.398997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.204 [2024-07-26 12:25:56.399012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.204 [2024-07-26 12:25:56.399025] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.204 [2024-07-26 12:25:56.399054] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.204 qpair failed and we were unable to recover it. 
00:25:03.204 [2024-07-26 12:25:56.408802] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.204 [2024-07-26 12:25:56.408928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.204 [2024-07-26 12:25:56.408954] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.204 [2024-07-26 12:25:56.408969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.204 [2024-07-26 12:25:56.408983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.204 [2024-07-26 12:25:56.409011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.204 qpair failed and we were unable to recover it. 
00:25:03.204 [2024-07-26 12:25:56.418833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.204 [2024-07-26 12:25:56.418959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.204 [2024-07-26 12:25:56.418985] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.204 [2024-07-26 12:25:56.419000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.204 [2024-07-26 12:25:56.419013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.204 [2024-07-26 12:25:56.419042] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.204 qpair failed and we were unable to recover it. 
00:25:03.204 [2024-07-26 12:25:56.428871] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.204 [2024-07-26 12:25:56.429010] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.204 [2024-07-26 12:25:56.429036] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.204 [2024-07-26 12:25:56.429051] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.204 [2024-07-26 12:25:56.429072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.204 [2024-07-26 12:25:56.429107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.204 qpair failed and we were unable to recover it. 
00:25:03.204 [2024-07-26 12:25:56.438903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.204 [2024-07-26 12:25:56.439031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.204 [2024-07-26 12:25:56.439063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.204 [2024-07-26 12:25:56.439080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.204 [2024-07-26 12:25:56.439093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.204 [2024-07-26 12:25:56.439122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.204 qpair failed and we were unable to recover it. 
00:25:03.204 [2024-07-26 12:25:56.448909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.204 [2024-07-26 12:25:56.449044] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.204 [2024-07-26 12:25:56.449089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.204 [2024-07-26 12:25:56.449106] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.204 [2024-07-26 12:25:56.449119] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.204 [2024-07-26 12:25:56.449148] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.204 qpair failed and we were unable to recover it. 
00:25:03.465 [2024-07-26 12:25:56.458922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.465 [2024-07-26 12:25:56.459110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.465 [2024-07-26 12:25:56.459137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.465 [2024-07-26 12:25:56.459153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.465 [2024-07-26 12:25:56.459165] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.465 [2024-07-26 12:25:56.459194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.465 qpair failed and we were unable to recover it. 
00:25:03.465 [2024-07-26 12:25:56.469089] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.465 [2024-07-26 12:25:56.469231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.465 [2024-07-26 12:25:56.469258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.465 [2024-07-26 12:25:56.469275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.465 [2024-07-26 12:25:56.469288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.465 [2024-07-26 12:25:56.469317] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.465 qpair failed and we were unable to recover it. 
00:25:03.465 [2024-07-26 12:25:56.478986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.465 [2024-07-26 12:25:56.479121] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.465 [2024-07-26 12:25:56.479153] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.465 [2024-07-26 12:25:56.479169] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.465 [2024-07-26 12:25:56.479193] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.465 [2024-07-26 12:25:56.479222] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.465 qpair failed and we were unable to recover it. 
00:25:03.465 [2024-07-26 12:25:56.489050] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.465 [2024-07-26 12:25:56.489191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.465 [2024-07-26 12:25:56.489220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.465 [2024-07-26 12:25:56.489237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.465 [2024-07-26 12:25:56.489250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.465 [2024-07-26 12:25:56.489280] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.465 qpair failed and we were unable to recover it. 
00:25:03.465 [2024-07-26 12:25:56.499137] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.465 [2024-07-26 12:25:56.499266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.465 [2024-07-26 12:25:56.499291] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.465 [2024-07-26 12:25:56.499307] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.465 [2024-07-26 12:25:56.499321] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.465 [2024-07-26 12:25:56.499350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.465 qpair failed and we were unable to recover it. 
00:25:03.465 [2024-07-26 12:25:56.509121] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.465 [2024-07-26 12:25:56.509257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.465 [2024-07-26 12:25:56.509284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.465 [2024-07-26 12:25:56.509300] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.465 [2024-07-26 12:25:56.509313] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.465 [2024-07-26 12:25:56.509343] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.465 qpair failed and we were unable to recover it.
00:25:03.465 [2024-07-26 12:25:56.519107] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.465 [2024-07-26 12:25:56.519241] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.465 [2024-07-26 12:25:56.519267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.465 [2024-07-26 12:25:56.519283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.465 [2024-07-26 12:25:56.519302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.465 [2024-07-26 12:25:56.519332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.465 qpair failed and we were unable to recover it.
00:25:03.465 [2024-07-26 12:25:56.529217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.465 [2024-07-26 12:25:56.529381] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.465 [2024-07-26 12:25:56.529408] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.465 [2024-07-26 12:25:56.529423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.465 [2024-07-26 12:25:56.529436] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.465 [2024-07-26 12:25:56.529463] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.465 qpair failed and we were unable to recover it.
00:25:03.465 [2024-07-26 12:25:56.539179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.465 [2024-07-26 12:25:56.539301] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.465 [2024-07-26 12:25:56.539327] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.465 [2024-07-26 12:25:56.539343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.466 [2024-07-26 12:25:56.539357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.466 [2024-07-26 12:25:56.539385] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.466 qpair failed and we were unable to recover it.
00:25:03.466 [2024-07-26 12:25:56.549235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.466 [2024-07-26 12:25:56.549409] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.466 [2024-07-26 12:25:56.549436] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.466 [2024-07-26 12:25:56.549466] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.466 [2024-07-26 12:25:56.549479] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.466 [2024-07-26 12:25:56.549508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.466 qpair failed and we were unable to recover it.
00:25:03.466 [2024-07-26 12:25:56.559262] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.466 [2024-07-26 12:25:56.559397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.466 [2024-07-26 12:25:56.559423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.466 [2024-07-26 12:25:56.559439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.466 [2024-07-26 12:25:56.559452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.466 [2024-07-26 12:25:56.559496] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.466 qpair failed and we were unable to recover it.
00:25:03.466 [2024-07-26 12:25:56.569366] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.466 [2024-07-26 12:25:56.569547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.466 [2024-07-26 12:25:56.569574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.466 [2024-07-26 12:25:56.569590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.466 [2024-07-26 12:25:56.569603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.466 [2024-07-26 12:25:56.569632] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.466 qpair failed and we were unable to recover it.
00:25:03.466 [2024-07-26 12:25:56.579325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.466 [2024-07-26 12:25:56.579489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.466 [2024-07-26 12:25:56.579516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.466 [2024-07-26 12:25:56.579532] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.466 [2024-07-26 12:25:56.579545] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.466 [2024-07-26 12:25:56.579573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.466 qpair failed and we were unable to recover it.
00:25:03.466 [2024-07-26 12:25:56.589408] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.466 [2024-07-26 12:25:56.589562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.466 [2024-07-26 12:25:56.589603] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.466 [2024-07-26 12:25:56.589619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.466 [2024-07-26 12:25:56.589632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.466 [2024-07-26 12:25:56.589676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.466 qpair failed and we were unable to recover it.
00:25:03.466 [2024-07-26 12:25:56.599365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.466 [2024-07-26 12:25:56.599499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.466 [2024-07-26 12:25:56.599525] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.466 [2024-07-26 12:25:56.599541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.466 [2024-07-26 12:25:56.599554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.466 [2024-07-26 12:25:56.599598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.466 qpair failed and we were unable to recover it.
00:25:03.466 [2024-07-26 12:25:56.609453] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.466 [2024-07-26 12:25:56.609576] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.466 [2024-07-26 12:25:56.609602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.466 [2024-07-26 12:25:56.609617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.466 [2024-07-26 12:25:56.609636] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.466 [2024-07-26 12:25:56.609666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.466 qpair failed and we were unable to recover it.
00:25:03.466 [2024-07-26 12:25:56.619393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.466 [2024-07-26 12:25:56.619518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.466 [2024-07-26 12:25:56.619542] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.466 [2024-07-26 12:25:56.619558] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.466 [2024-07-26 12:25:56.619570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.466 [2024-07-26 12:25:56.619600] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.466 qpair failed and we were unable to recover it.
00:25:03.466 [2024-07-26 12:25:56.629459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.466 [2024-07-26 12:25:56.629595] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.466 [2024-07-26 12:25:56.629623] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.466 [2024-07-26 12:25:56.629638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.466 [2024-07-26 12:25:56.629652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.466 [2024-07-26 12:25:56.629680] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.466 qpair failed and we were unable to recover it.
00:25:03.466 [2024-07-26 12:25:56.639457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.466 [2024-07-26 12:25:56.639582] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.466 [2024-07-26 12:25:56.639609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.466 [2024-07-26 12:25:56.639625] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.466 [2024-07-26 12:25:56.639638] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.466 [2024-07-26 12:25:56.639667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.466 qpair failed and we were unable to recover it.
00:25:03.466 [2024-07-26 12:25:56.649463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.466 [2024-07-26 12:25:56.649586] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.466 [2024-07-26 12:25:56.649612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.466 [2024-07-26 12:25:56.649628] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.466 [2024-07-26 12:25:56.649641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.466 [2024-07-26 12:25:56.649670] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.466 qpair failed and we were unable to recover it.
00:25:03.466 [2024-07-26 12:25:56.659553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.466 [2024-07-26 12:25:56.659684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.466 [2024-07-26 12:25:56.659711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.466 [2024-07-26 12:25:56.659727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.466 [2024-07-26 12:25:56.659740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.466 [2024-07-26 12:25:56.659768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.466 qpair failed and we were unable to recover it.
00:25:03.466 [2024-07-26 12:25:56.669593] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.467 [2024-07-26 12:25:56.669732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.467 [2024-07-26 12:25:56.669758] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.467 [2024-07-26 12:25:56.669774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.467 [2024-07-26 12:25:56.669787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.467 [2024-07-26 12:25:56.669831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.467 qpair failed and we were unable to recover it.
00:25:03.467 [2024-07-26 12:25:56.679565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.467 [2024-07-26 12:25:56.679709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.467 [2024-07-26 12:25:56.679735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.467 [2024-07-26 12:25:56.679750] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.467 [2024-07-26 12:25:56.679763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.467 [2024-07-26 12:25:56.679792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.467 qpair failed and we were unable to recover it.
00:25:03.467 [2024-07-26 12:25:56.689596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.467 [2024-07-26 12:25:56.689727] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.467 [2024-07-26 12:25:56.689753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.467 [2024-07-26 12:25:56.689769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.467 [2024-07-26 12:25:56.689782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.467 [2024-07-26 12:25:56.689811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.467 qpair failed and we were unable to recover it.
00:25:03.467 [2024-07-26 12:25:56.699646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.467 [2024-07-26 12:25:56.699779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.467 [2024-07-26 12:25:56.699806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.467 [2024-07-26 12:25:56.699822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.467 [2024-07-26 12:25:56.699841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.467 [2024-07-26 12:25:56.699869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.467 qpair failed and we were unable to recover it.
00:25:03.467 [2024-07-26 12:25:56.709678] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.467 [2024-07-26 12:25:56.709828] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.467 [2024-07-26 12:25:56.709857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.467 [2024-07-26 12:25:56.709873] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.467 [2024-07-26 12:25:56.709901] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.467 [2024-07-26 12:25:56.709931] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.467 qpair failed and we were unable to recover it.
00:25:03.726 [2024-07-26 12:25:56.719672] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.726 [2024-07-26 12:25:56.719797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.726 [2024-07-26 12:25:56.719825] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.726 [2024-07-26 12:25:56.719840] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.726 [2024-07-26 12:25:56.719854] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.726 [2024-07-26 12:25:56.719884] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-07-26 12:25:56.729717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.726 [2024-07-26 12:25:56.729838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.726 [2024-07-26 12:25:56.729864] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.726 [2024-07-26 12:25:56.729880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.726 [2024-07-26 12:25:56.729894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.726 [2024-07-26 12:25:56.729923] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-07-26 12:25:56.739791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.726 [2024-07-26 12:25:56.739952] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.726 [2024-07-26 12:25:56.739980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.726 [2024-07-26 12:25:56.739997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.726 [2024-07-26 12:25:56.740011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.726 [2024-07-26 12:25:56.740041] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-07-26 12:25:56.749773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.726 [2024-07-26 12:25:56.749905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.726 [2024-07-26 12:25:56.749932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.726 [2024-07-26 12:25:56.749947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.726 [2024-07-26 12:25:56.749960] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.726 [2024-07-26 12:25:56.749989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-07-26 12:25:56.759781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.726 [2024-07-26 12:25:56.759926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.726 [2024-07-26 12:25:56.759953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.726 [2024-07-26 12:25:56.759968] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.726 [2024-07-26 12:25:56.759982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.726 [2024-07-26 12:25:56.760010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-07-26 12:25:56.769823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.726 [2024-07-26 12:25:56.769944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.726 [2024-07-26 12:25:56.769971] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.726 [2024-07-26 12:25:56.769986] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.726 [2024-07-26 12:25:56.770000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.726 [2024-07-26 12:25:56.770029] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.726 qpair failed and we were unable to recover it.
00:25:03.726 [2024-07-26 12:25:56.779876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.727 [2024-07-26 12:25:56.779997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.727 [2024-07-26 12:25:56.780024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.727 [2024-07-26 12:25:56.780040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.727 [2024-07-26 12:25:56.780054] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.727 [2024-07-26 12:25:56.780101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-07-26 12:25:56.789884] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.727 [2024-07-26 12:25:56.790056] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.727 [2024-07-26 12:25:56.790089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.727 [2024-07-26 12:25:56.790122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.727 [2024-07-26 12:25:56.790138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.727 [2024-07-26 12:25:56.790167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-07-26 12:25:56.799956] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.727 [2024-07-26 12:25:56.800113] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.727 [2024-07-26 12:25:56.800141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.727 [2024-07-26 12:25:56.800156] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.727 [2024-07-26 12:25:56.800169] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.727 [2024-07-26 12:25:56.800199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-07-26 12:25:56.809930] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.727 [2024-07-26 12:25:56.810111] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.727 [2024-07-26 12:25:56.810138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.727 [2024-07-26 12:25:56.810153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.727 [2024-07-26 12:25:56.810168] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.727 [2024-07-26 12:25:56.810197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-07-26 12:25:56.819968] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.727 [2024-07-26 12:25:56.820112] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.727 [2024-07-26 12:25:56.820138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.727 [2024-07-26 12:25:56.820153] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.727 [2024-07-26 12:25:56.820167] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.727 [2024-07-26 12:25:56.820195] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-07-26 12:25:56.830104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.727 [2024-07-26 12:25:56.830254] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.727 [2024-07-26 12:25:56.830279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.727 [2024-07-26 12:25:56.830294] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.727 [2024-07-26 12:25:56.830308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.727 [2024-07-26 12:25:56.830336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-07-26 12:25:56.840013] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.727 [2024-07-26 12:25:56.840148] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.727 [2024-07-26 12:25:56.840176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.727 [2024-07-26 12:25:56.840191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.727 [2024-07-26 12:25:56.840205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.727 [2024-07-26 12:25:56.840234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-07-26 12:25:56.850077] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.727 [2024-07-26 12:25:56.850202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.727 [2024-07-26 12:25:56.850227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.727 [2024-07-26 12:25:56.850242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.727 [2024-07-26 12:25:56.850255] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.727 [2024-07-26 12:25:56.850284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-07-26 12:25:56.860178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.727 [2024-07-26 12:25:56.860346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.727 [2024-07-26 12:25:56.860388] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.727 [2024-07-26 12:25:56.860412] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.727 [2024-07-26 12:25:56.860425] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.727 [2024-07-26 12:25:56.860455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-07-26 12:25:56.870151] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.727 [2024-07-26 12:25:56.870326] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.727 [2024-07-26 12:25:56.870359] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.727 [2024-07-26 12:25:56.870375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.727 [2024-07-26 12:25:56.870389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.727 [2024-07-26 12:25:56.870417] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.727 qpair failed and we were unable to recover it. 
00:25:03.727 [2024-07-26 12:25:56.880178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.727 [2024-07-26 12:25:56.880346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.727 [2024-07-26 12:25:56.880372] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.727 [2024-07-26 12:25:56.880408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.727 [2024-07-26 12:25:56.880423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.727 [2024-07-26 12:25:56.880451] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.727 qpair failed and we were unable to recover it. 
00:25:03.727 [2024-07-26 12:25:56.890174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.727 [2024-07-26 12:25:56.890305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.727 [2024-07-26 12:25:56.890331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.727 [2024-07-26 12:25:56.890346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.727 [2024-07-26 12:25:56.890360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.727 [2024-07-26 12:25:56.890389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.727 qpair failed and we were unable to recover it.
00:25:03.727 [2024-07-26 12:25:56.900215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.727 [2024-07-26 12:25:56.900348] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.727 [2024-07-26 12:25:56.900375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.727 [2024-07-26 12:25:56.900395] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.727 [2024-07-26 12:25:56.900409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.728 [2024-07-26 12:25:56.900454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-07-26 12:25:56.910237] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.728 [2024-07-26 12:25:56.910368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.728 [2024-07-26 12:25:56.910394] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.728 [2024-07-26 12:25:56.910409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.728 [2024-07-26 12:25:56.910421] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.728 [2024-07-26 12:25:56.910450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-07-26 12:25:56.920279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.728 [2024-07-26 12:25:56.920416] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.728 [2024-07-26 12:25:56.920442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.728 [2024-07-26 12:25:56.920457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.728 [2024-07-26 12:25:56.920471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.728 [2024-07-26 12:25:56.920499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-07-26 12:25:56.930326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.728 [2024-07-26 12:25:56.930456] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.728 [2024-07-26 12:25:56.930483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.728 [2024-07-26 12:25:56.930503] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.728 [2024-07-26 12:25:56.930517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.728 [2024-07-26 12:25:56.930547] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-07-26 12:25:56.940307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.728 [2024-07-26 12:25:56.940431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.728 [2024-07-26 12:25:56.940458] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.728 [2024-07-26 12:25:56.940473] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.728 [2024-07-26 12:25:56.940488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.728 [2024-07-26 12:25:56.940517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-07-26 12:25:56.950435] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.728 [2024-07-26 12:25:56.950604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.728 [2024-07-26 12:25:56.950630] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.728 [2024-07-26 12:25:56.950646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.728 [2024-07-26 12:25:56.950660] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.728 [2024-07-26 12:25:56.950688] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-07-26 12:25:56.960389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.728 [2024-07-26 12:25:56.960536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.728 [2024-07-26 12:25:56.960561] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.728 [2024-07-26 12:25:56.960576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.728 [2024-07-26 12:25:56.960588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.728 [2024-07-26 12:25:56.960630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.728 [2024-07-26 12:25:56.970411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.728 [2024-07-26 12:25:56.970538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.728 [2024-07-26 12:25:56.970563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.728 [2024-07-26 12:25:56.970584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.728 [2024-07-26 12:25:56.970599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.728 [2024-07-26 12:25:56.970627] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.728 qpair failed and we were unable to recover it.
00:25:03.987 [2024-07-26 12:25:56.980438] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.987 [2024-07-26 12:25:56.980606] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.987 [2024-07-26 12:25:56.980632] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.987 [2024-07-26 12:25:56.980647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.987 [2024-07-26 12:25:56.980661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.987 [2024-07-26 12:25:56.980689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.987 qpair failed and we were unable to recover it.
00:25:03.987 [2024-07-26 12:25:56.990537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.987 [2024-07-26 12:25:56.990707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.987 [2024-07-26 12:25:56.990733] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.987 [2024-07-26 12:25:56.990749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.987 [2024-07-26 12:25:56.990762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.987 [2024-07-26 12:25:56.990806] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.987 qpair failed and we were unable to recover it.
00:25:03.987 [2024-07-26 12:25:57.000492] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.987 [2024-07-26 12:25:57.000621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.988 [2024-07-26 12:25:57.000646] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.988 [2024-07-26 12:25:57.000661] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.988 [2024-07-26 12:25:57.000675] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.988 [2024-07-26 12:25:57.000703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.988 qpair failed and we were unable to recover it.
00:25:03.988 [2024-07-26 12:25:57.010520] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.988 [2024-07-26 12:25:57.010652] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.988 [2024-07-26 12:25:57.010678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.988 [2024-07-26 12:25:57.010692] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.988 [2024-07-26 12:25:57.010706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.988 [2024-07-26 12:25:57.010735] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.988 qpair failed and we were unable to recover it.
00:25:03.988 [2024-07-26 12:25:57.020649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.988 [2024-07-26 12:25:57.020814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.988 [2024-07-26 12:25:57.020839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.988 [2024-07-26 12:25:57.020854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.988 [2024-07-26 12:25:57.020867] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.988 [2024-07-26 12:25:57.020909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.988 qpair failed and we were unable to recover it.
00:25:03.988 [2024-07-26 12:25:57.030606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.988 [2024-07-26 12:25:57.030751] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.988 [2024-07-26 12:25:57.030777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.988 [2024-07-26 12:25:57.030792] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.988 [2024-07-26 12:25:57.030820] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.988 [2024-07-26 12:25:57.030848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.988 qpair failed and we were unable to recover it.
00:25:03.988 [2024-07-26 12:25:57.040623] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.988 [2024-07-26 12:25:57.040760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.988 [2024-07-26 12:25:57.040788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.988 [2024-07-26 12:25:57.040808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.988 [2024-07-26 12:25:57.040822] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.988 [2024-07-26 12:25:57.040866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.988 qpair failed and we were unable to recover it.
00:25:03.988 [2024-07-26 12:25:57.050607] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.988 [2024-07-26 12:25:57.050733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.988 [2024-07-26 12:25:57.050759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.988 [2024-07-26 12:25:57.050775] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.988 [2024-07-26 12:25:57.050789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.988 [2024-07-26 12:25:57.050818] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.988 qpair failed and we were unable to recover it.
00:25:03.988 [2024-07-26 12:25:57.060656] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.988 [2024-07-26 12:25:57.060784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.988 [2024-07-26 12:25:57.060814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.988 [2024-07-26 12:25:57.060831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.988 [2024-07-26 12:25:57.060845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.988 [2024-07-26 12:25:57.060874] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.988 qpair failed and we were unable to recover it.
00:25:03.988 [2024-07-26 12:25:57.070719] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.988 [2024-07-26 12:25:57.070868] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.988 [2024-07-26 12:25:57.070896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.988 [2024-07-26 12:25:57.070915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.988 [2024-07-26 12:25:57.070929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.988 [2024-07-26 12:25:57.070974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.988 qpair failed and we were unable to recover it.
00:25:03.988 [2024-07-26 12:25:57.080724] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.988 [2024-07-26 12:25:57.080855] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.988 [2024-07-26 12:25:57.080882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.988 [2024-07-26 12:25:57.080897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.988 [2024-07-26 12:25:57.080911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.988 [2024-07-26 12:25:57.080940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.988 qpair failed and we were unable to recover it.
00:25:03.988 [2024-07-26 12:25:57.090763] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.988 [2024-07-26 12:25:57.090927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.988 [2024-07-26 12:25:57.090953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.988 [2024-07-26 12:25:57.090969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.988 [2024-07-26 12:25:57.090983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.988 [2024-07-26 12:25:57.091011] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.988 qpair failed and we were unable to recover it.
00:25:03.988 [2024-07-26 12:25:57.100864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.988 [2024-07-26 12:25:57.100990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.988 [2024-07-26 12:25:57.101016] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.988 [2024-07-26 12:25:57.101031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.988 [2024-07-26 12:25:57.101045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.988 [2024-07-26 12:25:57.101087] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.988 qpair failed and we were unable to recover it.
00:25:03.988 [2024-07-26 12:25:57.110799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.988 [2024-07-26 12:25:57.110931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.988 [2024-07-26 12:25:57.110957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.988 [2024-07-26 12:25:57.110972] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.988 [2024-07-26 12:25:57.110986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.988 [2024-07-26 12:25:57.111015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.988 qpair failed and we were unable to recover it.
00:25:03.988 [2024-07-26 12:25:57.120839] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.988 [2024-07-26 12:25:57.120966] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.988 [2024-07-26 12:25:57.120992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.988 [2024-07-26 12:25:57.121006] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.988 [2024-07-26 12:25:57.121020] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.989 [2024-07-26 12:25:57.121048] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.989 qpair failed and we were unable to recover it.
00:25:03.989 [2024-07-26 12:25:57.130851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.989 [2024-07-26 12:25:57.130972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.989 [2024-07-26 12:25:57.130997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.989 [2024-07-26 12:25:57.131012] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.989 [2024-07-26 12:25:57.131026] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.989 [2024-07-26 12:25:57.131055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.989 qpair failed and we were unable to recover it.
00:25:03.989 [2024-07-26 12:25:57.140891] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.989 [2024-07-26 12:25:57.141021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.989 [2024-07-26 12:25:57.141047] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.989 [2024-07-26 12:25:57.141069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.989 [2024-07-26 12:25:57.141083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.989 [2024-07-26 12:25:57.141113] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.989 qpair failed and we were unable to recover it.
00:25:03.989 [2024-07-26 12:25:57.151014] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.989 [2024-07-26 12:25:57.151196] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.989 [2024-07-26 12:25:57.151227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.989 [2024-07-26 12:25:57.151243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.989 [2024-07-26 12:25:57.151256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.989 [2024-07-26 12:25:57.151285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.989 qpair failed and we were unable to recover it.
00:25:03.989 [2024-07-26 12:25:57.160936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.989 [2024-07-26 12:25:57.161081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.989 [2024-07-26 12:25:57.161107] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.989 [2024-07-26 12:25:57.161122] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.989 [2024-07-26 12:25:57.161136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.989 [2024-07-26 12:25:57.161165] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.989 qpair failed and we were unable to recover it.
00:25:03.989 [2024-07-26 12:25:57.170999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.989 [2024-07-26 12:25:57.171133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.989 [2024-07-26 12:25:57.171160] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.989 [2024-07-26 12:25:57.171175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.989 [2024-07-26 12:25:57.171189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.989 [2024-07-26 12:25:57.171217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.989 qpair failed and we were unable to recover it.
00:25:03.989 [2024-07-26 12:25:57.180998] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:03.989 [2024-07-26 12:25:57.181141] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:03.989 [2024-07-26 12:25:57.181168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:03.989 [2024-07-26 12:25:57.181183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:03.989 [2024-07-26 12:25:57.181197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:03.989 [2024-07-26 12:25:57.181226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:03.989 qpair failed and we were unable to recover it.
00:25:03.989 [2024-07-26 12:25:57.191125] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.989 [2024-07-26 12:25:57.191264] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.989 [2024-07-26 12:25:57.191290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.989 [2024-07-26 12:25:57.191306] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.989 [2024-07-26 12:25:57.191319] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.989 [2024-07-26 12:25:57.191369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.989 qpair failed and we were unable to recover it. 
00:25:03.989 [2024-07-26 12:25:57.201148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.989 [2024-07-26 12:25:57.201275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.989 [2024-07-26 12:25:57.201301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.989 [2024-07-26 12:25:57.201316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.989 [2024-07-26 12:25:57.201330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.989 [2024-07-26 12:25:57.201358] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.989 qpair failed and we were unable to recover it. 
00:25:03.989 [2024-07-26 12:25:57.211219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.989 [2024-07-26 12:25:57.211412] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.989 [2024-07-26 12:25:57.211437] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.989 [2024-07-26 12:25:57.211453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.989 [2024-07-26 12:25:57.211466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.989 [2024-07-26 12:25:57.211508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.989 qpair failed and we were unable to recover it. 
00:25:03.989 [2024-07-26 12:25:57.221206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.989 [2024-07-26 12:25:57.221342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.989 [2024-07-26 12:25:57.221368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.989 [2024-07-26 12:25:57.221383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.989 [2024-07-26 12:25:57.221397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.989 [2024-07-26 12:25:57.221425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.989 qpair failed and we were unable to recover it. 
00:25:03.989 [2024-07-26 12:25:57.231174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:03.989 [2024-07-26 12:25:57.231342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:03.989 [2024-07-26 12:25:57.231368] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:03.989 [2024-07-26 12:25:57.231383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:03.989 [2024-07-26 12:25:57.231397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:03.989 [2024-07-26 12:25:57.231425] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:03.989 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-26 12:25:57.241176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.249 [2024-07-26 12:25:57.241330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.249 [2024-07-26 12:25:57.241360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.249 [2024-07-26 12:25:57.241376] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.249 [2024-07-26 12:25:57.241390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.249 [2024-07-26 12:25:57.241418] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-26 12:25:57.251238] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.249 [2024-07-26 12:25:57.251378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.249 [2024-07-26 12:25:57.251405] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.249 [2024-07-26 12:25:57.251421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.249 [2024-07-26 12:25:57.251435] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.249 [2024-07-26 12:25:57.251478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-26 12:25:57.261233] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.249 [2024-07-26 12:25:57.261370] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.249 [2024-07-26 12:25:57.261396] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.249 [2024-07-26 12:25:57.261413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.249 [2024-07-26 12:25:57.261427] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.249 [2024-07-26 12:25:57.261455] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-26 12:25:57.271291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.249 [2024-07-26 12:25:57.271429] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.249 [2024-07-26 12:25:57.271455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.249 [2024-07-26 12:25:57.271470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.249 [2024-07-26 12:25:57.271484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.249 [2024-07-26 12:25:57.271528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-26 12:25:57.281445] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.249 [2024-07-26 12:25:57.281637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.249 [2024-07-26 12:25:57.281662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.249 [2024-07-26 12:25:57.281677] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.249 [2024-07-26 12:25:57.281691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.249 [2024-07-26 12:25:57.281739] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-26 12:25:57.291311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.249 [2024-07-26 12:25:57.291437] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.249 [2024-07-26 12:25:57.291463] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.249 [2024-07-26 12:25:57.291477] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.249 [2024-07-26 12:25:57.291491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.249 [2024-07-26 12:25:57.291519] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-26 12:25:57.301361] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.249 [2024-07-26 12:25:57.301484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.249 [2024-07-26 12:25:57.301510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.249 [2024-07-26 12:25:57.301525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.249 [2024-07-26 12:25:57.301539] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.249 [2024-07-26 12:25:57.301567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-26 12:25:57.311415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.249 [2024-07-26 12:25:57.311555] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.249 [2024-07-26 12:25:57.311583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.249 [2024-07-26 12:25:57.311603] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.249 [2024-07-26 12:25:57.311617] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.249 [2024-07-26 12:25:57.311647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-26 12:25:57.321385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.249 [2024-07-26 12:25:57.321520] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.249 [2024-07-26 12:25:57.321546] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.249 [2024-07-26 12:25:57.321562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.249 [2024-07-26 12:25:57.321575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.249 [2024-07-26 12:25:57.321604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.249 [2024-07-26 12:25:57.331456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.249 [2024-07-26 12:25:57.331587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.249 [2024-07-26 12:25:57.331619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.249 [2024-07-26 12:25:57.331635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.249 [2024-07-26 12:25:57.331649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.249 [2024-07-26 12:25:57.331678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.249 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-26 12:25:57.341448] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.250 [2024-07-26 12:25:57.341618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.250 [2024-07-26 12:25:57.341644] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.250 [2024-07-26 12:25:57.341660] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.250 [2024-07-26 12:25:57.341673] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.250 [2024-07-26 12:25:57.341702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-26 12:25:57.351553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.250 [2024-07-26 12:25:57.351690] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.250 [2024-07-26 12:25:57.351716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.250 [2024-07-26 12:25:57.351732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.250 [2024-07-26 12:25:57.351745] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.250 [2024-07-26 12:25:57.351789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-26 12:25:57.361509] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.250 [2024-07-26 12:25:57.361640] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.250 [2024-07-26 12:25:57.361667] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.250 [2024-07-26 12:25:57.361682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.250 [2024-07-26 12:25:57.361696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.250 [2024-07-26 12:25:57.361724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-26 12:25:57.371628] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.250 [2024-07-26 12:25:57.371770] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.250 [2024-07-26 12:25:57.371795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.250 [2024-07-26 12:25:57.371810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.250 [2024-07-26 12:25:57.371830] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.250 [2024-07-26 12:25:57.371858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-26 12:25:57.381653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.250 [2024-07-26 12:25:57.381779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.250 [2024-07-26 12:25:57.381805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.250 [2024-07-26 12:25:57.381820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.250 [2024-07-26 12:25:57.381834] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.250 [2024-07-26 12:25:57.381862] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-26 12:25:57.391654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.250 [2024-07-26 12:25:57.391817] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.250 [2024-07-26 12:25:57.391843] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.250 [2024-07-26 12:25:57.391858] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.250 [2024-07-26 12:25:57.391887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.250 [2024-07-26 12:25:57.391915] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-26 12:25:57.401625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.250 [2024-07-26 12:25:57.401758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.250 [2024-07-26 12:25:57.401784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.250 [2024-07-26 12:25:57.401799] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.250 [2024-07-26 12:25:57.401813] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.250 [2024-07-26 12:25:57.401841] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-26 12:25:57.411650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.250 [2024-07-26 12:25:57.411778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.250 [2024-07-26 12:25:57.411803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.250 [2024-07-26 12:25:57.411819] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.250 [2024-07-26 12:25:57.411833] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.250 [2024-07-26 12:25:57.411861] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-26 12:25:57.421717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.250 [2024-07-26 12:25:57.421854] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.250 [2024-07-26 12:25:57.421881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.250 [2024-07-26 12:25:57.421896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.250 [2024-07-26 12:25:57.421911] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.250 [2024-07-26 12:25:57.421954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-26 12:25:57.431739] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.250 [2024-07-26 12:25:57.431872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.250 [2024-07-26 12:25:57.431897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.250 [2024-07-26 12:25:57.431912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.250 [2024-07-26 12:25:57.431927] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.250 [2024-07-26 12:25:57.431954] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-26 12:25:57.441833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.250 [2024-07-26 12:25:57.441964] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.250 [2024-07-26 12:25:57.441990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.250 [2024-07-26 12:25:57.442005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.250 [2024-07-26 12:25:57.442019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.250 [2024-07-26 12:25:57.442055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-26 12:25:57.451764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.250 [2024-07-26 12:25:57.451896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.250 [2024-07-26 12:25:57.451922] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.250 [2024-07-26 12:25:57.451949] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.250 [2024-07-26 12:25:57.451963] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.250 [2024-07-26 12:25:57.451991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.250 qpair failed and we were unable to recover it. 
00:25:04.250 [2024-07-26 12:25:57.461798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.250 [2024-07-26 12:25:57.461933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.250 [2024-07-26 12:25:57.461960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.250 [2024-07-26 12:25:57.461976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.251 [2024-07-26 12:25:57.461995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.251 [2024-07-26 12:25:57.462039] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.251 qpair failed and we were unable to recover it. 
00:25:04.251 [2024-07-26 12:25:57.471847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.251 [2024-07-26 12:25:57.472005] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.251 [2024-07-26 12:25:57.472033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.251 [2024-07-26 12:25:57.472048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.251 [2024-07-26 12:25:57.472083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.251 [2024-07-26 12:25:57.472114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-26 12:25:57.481985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.251 [2024-07-26 12:25:57.482125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.251 [2024-07-26 12:25:57.482152] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.251 [2024-07-26 12:25:57.482167] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.251 [2024-07-26 12:25:57.482181] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.251 [2024-07-26 12:25:57.482210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.251 [2024-07-26 12:25:57.491888] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.251 [2024-07-26 12:25:57.492031] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.251 [2024-07-26 12:25:57.492071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.251 [2024-07-26 12:25:57.492089] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.251 [2024-07-26 12:25:57.492103] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.251 [2024-07-26 12:25:57.492131] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.251 qpair failed and we were unable to recover it.
00:25:04.512 [2024-07-26 12:25:57.502005] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.512 [2024-07-26 12:25:57.502144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.512 [2024-07-26 12:25:57.502170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.512 [2024-07-26 12:25:57.502186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.512 [2024-07-26 12:25:57.502200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.512 [2024-07-26 12:25:57.502228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.512 qpair failed and we were unable to recover it.
00:25:04.512 [2024-07-26 12:25:57.511958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.512 [2024-07-26 12:25:57.512142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.512 [2024-07-26 12:25:57.512168] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.512 [2024-07-26 12:25:57.512184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.512 [2024-07-26 12:25:57.512198] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.512 [2024-07-26 12:25:57.512227] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.512 qpair failed and we were unable to recover it.
00:25:04.512 [2024-07-26 12:25:57.521991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.512 [2024-07-26 12:25:57.522146] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.512 [2024-07-26 12:25:57.522172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.512 [2024-07-26 12:25:57.522187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.512 [2024-07-26 12:25:57.522200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.512 [2024-07-26 12:25:57.522228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.512 qpair failed and we were unable to recover it.
00:25:04.512 [2024-07-26 12:25:57.532002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.512 [2024-07-26 12:25:57.532130] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.512 [2024-07-26 12:25:57.532156] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.512 [2024-07-26 12:25:57.532171] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.512 [2024-07-26 12:25:57.532184] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.512 [2024-07-26 12:25:57.532212] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.512 qpair failed and we were unable to recover it.
00:25:04.512 [2024-07-26 12:25:57.542172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.512 [2024-07-26 12:25:57.542346] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.512 [2024-07-26 12:25:57.542374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.512 [2024-07-26 12:25:57.542388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.512 [2024-07-26 12:25:57.542401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.512 [2024-07-26 12:25:57.542431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.512 qpair failed and we were unable to recover it.
00:25:04.512 [2024-07-26 12:25:57.552085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.512 [2024-07-26 12:25:57.552225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.512 [2024-07-26 12:25:57.552250] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.512 [2024-07-26 12:25:57.552273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.512 [2024-07-26 12:25:57.552287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.512 [2024-07-26 12:25:57.552315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.512 qpair failed and we were unable to recover it.
00:25:04.513 [2024-07-26 12:25:57.562081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.513 [2024-07-26 12:25:57.562203] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.513 [2024-07-26 12:25:57.562228] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.513 [2024-07-26 12:25:57.562243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.513 [2024-07-26 12:25:57.562256] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.513 [2024-07-26 12:25:57.562283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.513 qpair failed and we were unable to recover it.
00:25:04.513 [2024-07-26 12:25:57.572120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.513 [2024-07-26 12:25:57.572265] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.513 [2024-07-26 12:25:57.572290] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.513 [2024-07-26 12:25:57.572304] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.513 [2024-07-26 12:25:57.572317] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.513 [2024-07-26 12:25:57.572345] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.513 qpair failed and we were unable to recover it.
00:25:04.513 [2024-07-26 12:25:57.582154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.513 [2024-07-26 12:25:57.582293] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.513 [2024-07-26 12:25:57.582319] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.513 [2024-07-26 12:25:57.582334] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.513 [2024-07-26 12:25:57.582347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.513 [2024-07-26 12:25:57.582374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.513 qpair failed and we were unable to recover it.
00:25:04.513 [2024-07-26 12:25:57.592179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.513 [2024-07-26 12:25:57.592319] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.513 [2024-07-26 12:25:57.592344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.513 [2024-07-26 12:25:57.592358] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.513 [2024-07-26 12:25:57.592370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.513 [2024-07-26 12:25:57.592397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.513 qpair failed and we were unable to recover it.
00:25:04.513 [2024-07-26 12:25:57.602215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.513 [2024-07-26 12:25:57.602358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.513 [2024-07-26 12:25:57.602383] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.513 [2024-07-26 12:25:57.602397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.513 [2024-07-26 12:25:57.602411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.513 [2024-07-26 12:25:57.602438] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.513 qpair failed and we were unable to recover it.
00:25:04.513 [2024-07-26 12:25:57.612228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.513 [2024-07-26 12:25:57.612353] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.513 [2024-07-26 12:25:57.612378] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.513 [2024-07-26 12:25:57.612392] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.513 [2024-07-26 12:25:57.612405] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.513 [2024-07-26 12:25:57.612432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.513 qpair failed and we were unable to recover it.
00:25:04.513 [2024-07-26 12:25:57.622271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.513 [2024-07-26 12:25:57.622424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.513 [2024-07-26 12:25:57.622448] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.513 [2024-07-26 12:25:57.622462] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.513 [2024-07-26 12:25:57.622475] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.513 [2024-07-26 12:25:57.622502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.513 qpair failed and we were unable to recover it.
00:25:04.513 [2024-07-26 12:25:57.632395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.513 [2024-07-26 12:25:57.632540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.513 [2024-07-26 12:25:57.632567] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.513 [2024-07-26 12:25:57.632582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.513 [2024-07-26 12:25:57.632595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.513 [2024-07-26 12:25:57.632624] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.513 qpair failed and we were unable to recover it.
00:25:04.513 [2024-07-26 12:25:57.642395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.513 [2024-07-26 12:25:57.642518] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.513 [2024-07-26 12:25:57.642544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.513 [2024-07-26 12:25:57.642565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.513 [2024-07-26 12:25:57.642581] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.513 [2024-07-26 12:25:57.642610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.513 qpair failed and we were unable to recover it.
00:25:04.513 [2024-07-26 12:25:57.652365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.513 [2024-07-26 12:25:57.652501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.513 [2024-07-26 12:25:57.652526] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.513 [2024-07-26 12:25:57.652541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.513 [2024-07-26 12:25:57.652553] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.513 [2024-07-26 12:25:57.652581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.513 qpair failed and we were unable to recover it.
00:25:04.513 [2024-07-26 12:25:57.662404] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.513 [2024-07-26 12:25:57.662529] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.513 [2024-07-26 12:25:57.662555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.513 [2024-07-26 12:25:57.662570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.513 [2024-07-26 12:25:57.662583] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.513 [2024-07-26 12:25:57.662611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.513 qpair failed and we were unable to recover it.
00:25:04.513 [2024-07-26 12:25:57.672426] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.513 [2024-07-26 12:25:57.672592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.513 [2024-07-26 12:25:57.672618] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.513 [2024-07-26 12:25:57.672632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.513 [2024-07-26 12:25:57.672645] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.513 [2024-07-26 12:25:57.672674] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.513 qpair failed and we were unable to recover it.
00:25:04.513 [2024-07-26 12:25:57.682405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.513 [2024-07-26 12:25:57.682531] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.514 [2024-07-26 12:25:57.682557] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.514 [2024-07-26 12:25:57.682572] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.514 [2024-07-26 12:25:57.682584] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.514 [2024-07-26 12:25:57.682612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.514 qpair failed and we were unable to recover it.
00:25:04.514 [2024-07-26 12:25:57.692436] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.514 [2024-07-26 12:25:57.692587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.514 [2024-07-26 12:25:57.692612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.514 [2024-07-26 12:25:57.692627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.514 [2024-07-26 12:25:57.692640] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.514 [2024-07-26 12:25:57.692668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.514 qpair failed and we were unable to recover it.
00:25:04.514 [2024-07-26 12:25:57.702517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.514 [2024-07-26 12:25:57.702647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.514 [2024-07-26 12:25:57.702673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.514 [2024-07-26 12:25:57.702687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.514 [2024-07-26 12:25:57.702700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.514 [2024-07-26 12:25:57.702728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.514 qpair failed and we were unable to recover it.
00:25:04.514 [2024-07-26 12:25:57.712532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.514 [2024-07-26 12:25:57.712661] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.514 [2024-07-26 12:25:57.712686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.514 [2024-07-26 12:25:57.712700] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.514 [2024-07-26 12:25:57.712713] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.514 [2024-07-26 12:25:57.712741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.514 qpair failed and we were unable to recover it.
00:25:04.514 [2024-07-26 12:25:57.722556] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.514 [2024-07-26 12:25:57.722684] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.514 [2024-07-26 12:25:57.722710] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.514 [2024-07-26 12:25:57.722724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.514 [2024-07-26 12:25:57.722737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.514 [2024-07-26 12:25:57.722765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.514 qpair failed and we were unable to recover it.
00:25:04.514 [2024-07-26 12:25:57.732574] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.514 [2024-07-26 12:25:57.732695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.514 [2024-07-26 12:25:57.732720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.514 [2024-07-26 12:25:57.732741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.514 [2024-07-26 12:25:57.732756] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.514 [2024-07-26 12:25:57.732784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.514 qpair failed and we were unable to recover it.
00:25:04.514 [2024-07-26 12:25:57.742565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.514 [2024-07-26 12:25:57.742736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.514 [2024-07-26 12:25:57.742762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.514 [2024-07-26 12:25:57.742776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.514 [2024-07-26 12:25:57.742789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.514 [2024-07-26 12:25:57.742817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.514 qpair failed and we were unable to recover it.
00:25:04.514 [2024-07-26 12:25:57.752654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.514 [2024-07-26 12:25:57.752793] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.514 [2024-07-26 12:25:57.752818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.514 [2024-07-26 12:25:57.752833] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.514 [2024-07-26 12:25:57.752845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.514 [2024-07-26 12:25:57.752873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.514 qpair failed and we were unable to recover it.
00:25:04.514 [2024-07-26 12:25:57.762620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.514 [2024-07-26 12:25:57.762743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.514 [2024-07-26 12:25:57.762768] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.514 [2024-07-26 12:25:57.762782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.514 [2024-07-26 12:25:57.762795] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.514 [2024-07-26 12:25:57.762822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.514 qpair failed and we were unable to recover it.
00:25:04.776 [2024-07-26 12:25:57.772664] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.776 [2024-07-26 12:25:57.772787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.776 [2024-07-26 12:25:57.772812] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.776 [2024-07-26 12:25:57.772827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.776 [2024-07-26 12:25:57.772840] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.776 [2024-07-26 12:25:57.772867] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.776 qpair failed and we were unable to recover it.
00:25:04.776 [2024-07-26 12:25:57.782683] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.776 [2024-07-26 12:25:57.782823] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.776 [2024-07-26 12:25:57.782849] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.776 [2024-07-26 12:25:57.782863] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.776 [2024-07-26 12:25:57.782876] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.776 [2024-07-26 12:25:57.782903] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.776 qpair failed and we were unable to recover it.
00:25:04.776 [2024-07-26 12:25:57.792722] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.776 [2024-07-26 12:25:57.792866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.776 [2024-07-26 12:25:57.792891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.776 [2024-07-26 12:25:57.792906] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.776 [2024-07-26 12:25:57.792919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.776 [2024-07-26 12:25:57.792946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.776 qpair failed and we were unable to recover it.
00:25:04.776 [2024-07-26 12:25:57.802753] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.776 [2024-07-26 12:25:57.802876] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.776 [2024-07-26 12:25:57.802902] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.776 [2024-07-26 12:25:57.802917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.776 [2024-07-26 12:25:57.802929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.776 [2024-07-26 12:25:57.802957] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.776 qpair failed and we were unable to recover it.
00:25:04.776 [2024-07-26 12:25:57.812813] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.776 [2024-07-26 12:25:57.812967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.776 [2024-07-26 12:25:57.812992] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.776 [2024-07-26 12:25:57.813007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.776 [2024-07-26 12:25:57.813019] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.776 [2024-07-26 12:25:57.813047] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.776 qpair failed and we were unable to recover it.
00:25:04.776 [2024-07-26 12:25:57.822815] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.776 [2024-07-26 12:25:57.822950] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.776 [2024-07-26 12:25:57.822980] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.776 [2024-07-26 12:25:57.822995] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.776 [2024-07-26 12:25:57.823008] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.776 [2024-07-26 12:25:57.823036] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.776 qpair failed and we were unable to recover it.
00:25:04.776 [2024-07-26 12:25:57.832964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.776 [2024-07-26 12:25:57.833095] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.776 [2024-07-26 12:25:57.833121] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.776 [2024-07-26 12:25:57.833135] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.776 [2024-07-26 12:25:57.833148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.776 [2024-07-26 12:25:57.833176] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.776 qpair failed and we were unable to recover it. 
00:25:04.776 [2024-07-26 12:25:57.842938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.776 [2024-07-26 12:25:57.843081] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.776 [2024-07-26 12:25:57.843106] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.776 [2024-07-26 12:25:57.843121] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.776 [2024-07-26 12:25:57.843134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.776 [2024-07-26 12:25:57.843161] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.776 qpair failed and we were unable to recover it. 
00:25:04.776 [2024-07-26 12:25:57.852898] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.777 [2024-07-26 12:25:57.853026] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.777 [2024-07-26 12:25:57.853051] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.777 [2024-07-26 12:25:57.853075] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.777 [2024-07-26 12:25:57.853089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.777 [2024-07-26 12:25:57.853116] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.777 qpair failed and we were unable to recover it. 
00:25:04.777 [2024-07-26 12:25:57.862905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.777 [2024-07-26 12:25:57.863033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.777 [2024-07-26 12:25:57.863069] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.777 [2024-07-26 12:25:57.863087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.777 [2024-07-26 12:25:57.863100] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.777 [2024-07-26 12:25:57.863128] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.777 qpair failed and we were unable to recover it. 
00:25:04.777 [2024-07-26 12:25:57.872976] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.777 [2024-07-26 12:25:57.873156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.777 [2024-07-26 12:25:57.873184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.777 [2024-07-26 12:25:57.873200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.777 [2024-07-26 12:25:57.873213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.777 [2024-07-26 12:25:57.873244] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.777 qpair failed and we were unable to recover it. 
00:25:04.777 [2024-07-26 12:25:57.882987] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.777 [2024-07-26 12:25:57.883123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.777 [2024-07-26 12:25:57.883150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.777 [2024-07-26 12:25:57.883165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.777 [2024-07-26 12:25:57.883178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.777 [2024-07-26 12:25:57.883207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.777 qpair failed and we were unable to recover it. 
00:25:04.777 [2024-07-26 12:25:57.893043] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.777 [2024-07-26 12:25:57.893173] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.777 [2024-07-26 12:25:57.893199] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.777 [2024-07-26 12:25:57.893214] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.777 [2024-07-26 12:25:57.893227] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.777 [2024-07-26 12:25:57.893255] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.777 qpair failed and we were unable to recover it. 
00:25:04.777 [2024-07-26 12:25:57.903072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:04.777 [2024-07-26 12:25:57.903209] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:04.777 [2024-07-26 12:25:57.903235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:04.777 [2024-07-26 12:25:57.903249] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:04.777 [2024-07-26 12:25:57.903262] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:04.777 [2024-07-26 12:25:57.903290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:04.777 qpair failed and we were unable to recover it. 
00:25:04.777 [2024-07-26 12:25:57.913108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.777 [2024-07-26 12:25:57.913242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.777 [2024-07-26 12:25:57.913271] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.777 [2024-07-26 12:25:57.913287] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.777 [2024-07-26 12:25:57.913299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.777 [2024-07-26 12:25:57.913326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.777 qpair failed and we were unable to recover it.
00:25:04.777 [2024-07-26 12:25:57.923104] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.777 [2024-07-26 12:25:57.923255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.777 [2024-07-26 12:25:57.923280] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.777 [2024-07-26 12:25:57.923295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.777 [2024-07-26 12:25:57.923308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.777 [2024-07-26 12:25:57.923335] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.777 qpair failed and we were unable to recover it.
00:25:04.777 [2024-07-26 12:25:57.933108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.777 [2024-07-26 12:25:57.933249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.777 [2024-07-26 12:25:57.933274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.777 [2024-07-26 12:25:57.933289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.777 [2024-07-26 12:25:57.933302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.777 [2024-07-26 12:25:57.933329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.777 qpair failed and we were unable to recover it.
00:25:04.777 [2024-07-26 12:25:57.943134] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.777 [2024-07-26 12:25:57.943275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.777 [2024-07-26 12:25:57.943300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.777 [2024-07-26 12:25:57.943315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.777 [2024-07-26 12:25:57.943328] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.777 [2024-07-26 12:25:57.943355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.777 qpair failed and we were unable to recover it.
00:25:04.777 [2024-07-26 12:25:57.953205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.777 [2024-07-26 12:25:57.953350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.777 [2024-07-26 12:25:57.953375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.777 [2024-07-26 12:25:57.953390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.777 [2024-07-26 12:25:57.953403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.777 [2024-07-26 12:25:57.953436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.777 qpair failed and we were unable to recover it.
00:25:04.777 [2024-07-26 12:25:57.963188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.777 [2024-07-26 12:25:57.963318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.777 [2024-07-26 12:25:57.963341] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.777 [2024-07-26 12:25:57.963355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.777 [2024-07-26 12:25:57.963367] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.777 [2024-07-26 12:25:57.963394] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.777 qpair failed and we were unable to recover it.
00:25:04.777 [2024-07-26 12:25:57.973254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.777 [2024-07-26 12:25:57.973382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.777 [2024-07-26 12:25:57.973407] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.777 [2024-07-26 12:25:57.973421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.778 [2024-07-26 12:25:57.973434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.778 [2024-07-26 12:25:57.973462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.778 qpair failed and we were unable to recover it.
00:25:04.778 [2024-07-26 12:25:57.983269] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.778 [2024-07-26 12:25:57.983399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.778 [2024-07-26 12:25:57.983424] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.778 [2024-07-26 12:25:57.983439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.778 [2024-07-26 12:25:57.983452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.778 [2024-07-26 12:25:57.983480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.778 qpair failed and we were unable to recover it.
00:25:04.778 [2024-07-26 12:25:57.993297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.778 [2024-07-26 12:25:57.993436] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.778 [2024-07-26 12:25:57.993462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.778 [2024-07-26 12:25:57.993476] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.778 [2024-07-26 12:25:57.993489] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.778 [2024-07-26 12:25:57.993516] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.778 qpair failed and we were unable to recover it.
00:25:04.778 [2024-07-26 12:25:58.003351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.778 [2024-07-26 12:25:58.003479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.778 [2024-07-26 12:25:58.003509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.778 [2024-07-26 12:25:58.003525] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.778 [2024-07-26 12:25:58.003537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.778 [2024-07-26 12:25:58.003565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.778 qpair failed and we were unable to recover it.
00:25:04.778 [2024-07-26 12:25:58.013352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.778 [2024-07-26 12:25:58.013482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.778 [2024-07-26 12:25:58.013507] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.778 [2024-07-26 12:25:58.013521] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.778 [2024-07-26 12:25:58.013534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.778 [2024-07-26 12:25:58.013563] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.778 qpair failed and we were unable to recover it.
00:25:04.778 [2024-07-26 12:25:58.023413] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:04.778 [2024-07-26 12:25:58.023567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:04.778 [2024-07-26 12:25:58.023592] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:04.778 [2024-07-26 12:25:58.023607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:04.778 [2024-07-26 12:25:58.023619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:04.778 [2024-07-26 12:25:58.023647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:04.778 qpair failed and we were unable to recover it.
00:25:05.040 [2024-07-26 12:25:58.033393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.040 [2024-07-26 12:25:58.033575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.040 [2024-07-26 12:25:58.033600] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.040 [2024-07-26 12:25:58.033615] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.040 [2024-07-26 12:25:58.033628] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.040 [2024-07-26 12:25:58.033656] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.040 qpair failed and we were unable to recover it.
00:25:05.040 [2024-07-26 12:25:58.043420] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.040 [2024-07-26 12:25:58.043589] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.040 [2024-07-26 12:25:58.043614] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.040 [2024-07-26 12:25:58.043629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.040 [2024-07-26 12:25:58.043642] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.040 [2024-07-26 12:25:58.043676] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.040 qpair failed and we were unable to recover it.
00:25:05.040 [2024-07-26 12:25:58.053482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.040 [2024-07-26 12:25:58.053609] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.040 [2024-07-26 12:25:58.053634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.040 [2024-07-26 12:25:58.053648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.040 [2024-07-26 12:25:58.053661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.040 [2024-07-26 12:25:58.053689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.040 qpair failed and we were unable to recover it.
00:25:05.040 [2024-07-26 12:25:58.063519] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.040 [2024-07-26 12:25:58.063643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.040 [2024-07-26 12:25:58.063668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.040 [2024-07-26 12:25:58.063683] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.040 [2024-07-26 12:25:58.063696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.040 [2024-07-26 12:25:58.063723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.040 qpair failed and we were unable to recover it.
00:25:05.040 [2024-07-26 12:25:58.073542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.040 [2024-07-26 12:25:58.073686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.040 [2024-07-26 12:25:58.073711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.040 [2024-07-26 12:25:58.073725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.040 [2024-07-26 12:25:58.073738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.040 [2024-07-26 12:25:58.073766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.040 qpair failed and we were unable to recover it.
00:25:05.040 [2024-07-26 12:25:58.083566] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.040 [2024-07-26 12:25:58.083692] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.040 [2024-07-26 12:25:58.083718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.040 [2024-07-26 12:25:58.083733] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.040 [2024-07-26 12:25:58.083746] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.040 [2024-07-26 12:25:58.083775] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.040 qpair failed and we were unable to recover it.
00:25:05.040 [2024-07-26 12:25:58.093562] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.040 [2024-07-26 12:25:58.093695] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.040 [2024-07-26 12:25:58.093725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.040 [2024-07-26 12:25:58.093740] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.040 [2024-07-26 12:25:58.093753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.040 [2024-07-26 12:25:58.093780] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.040 qpair failed and we were unable to recover it.
00:25:05.040 [2024-07-26 12:25:58.103586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.040 [2024-07-26 12:25:58.103734] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.040 [2024-07-26 12:25:58.103759] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.040 [2024-07-26 12:25:58.103773] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.040 [2024-07-26 12:25:58.103786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.040 [2024-07-26 12:25:58.103814] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.040 qpair failed and we were unable to recover it.
00:25:05.040 [2024-07-26 12:25:58.113665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.040 [2024-07-26 12:25:58.113811] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.040 [2024-07-26 12:25:58.113836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.040 [2024-07-26 12:25:58.113850] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.040 [2024-07-26 12:25:58.113863] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.040 [2024-07-26 12:25:58.113890] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.041 qpair failed and we were unable to recover it.
00:25:05.041 [2024-07-26 12:25:58.123677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.041 [2024-07-26 12:25:58.123840] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.041 [2024-07-26 12:25:58.123866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.041 [2024-07-26 12:25:58.123880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.041 [2024-07-26 12:25:58.123893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.041 [2024-07-26 12:25:58.123920] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.041 qpair failed and we were unable to recover it.
00:25:05.041 [2024-07-26 12:25:58.133679] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.041 [2024-07-26 12:25:58.133822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.041 [2024-07-26 12:25:58.133847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.041 [2024-07-26 12:25:58.133862] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.041 [2024-07-26 12:25:58.133880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.041 [2024-07-26 12:25:58.133908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.041 qpair failed and we were unable to recover it.
00:25:05.041 [2024-07-26 12:25:58.143740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.041 [2024-07-26 12:25:58.143862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.041 [2024-07-26 12:25:58.143890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.041 [2024-07-26 12:25:58.143909] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.041 [2024-07-26 12:25:58.143922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.041 [2024-07-26 12:25:58.143951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.041 qpair failed and we were unable to recover it.
00:25:05.041 [2024-07-26 12:25:58.153764] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.041 [2024-07-26 12:25:58.153929] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.041 [2024-07-26 12:25:58.153955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.041 [2024-07-26 12:25:58.153970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.041 [2024-07-26 12:25:58.153982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.041 [2024-07-26 12:25:58.154010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.041 qpair failed and we were unable to recover it. 
00:25:05.041 [2024-07-26 12:25:58.163887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.041 [2024-07-26 12:25:58.164033] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.041 [2024-07-26 12:25:58.164063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.041 [2024-07-26 12:25:58.164080] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.041 [2024-07-26 12:25:58.164094] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.041 [2024-07-26 12:25:58.164122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.041 qpair failed and we were unable to recover it. 
00:25:05.041 [2024-07-26 12:25:58.173816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.041 [2024-07-26 12:25:58.173984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.041 [2024-07-26 12:25:58.174009] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.041 [2024-07-26 12:25:58.174023] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.041 [2024-07-26 12:25:58.174036] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.041 [2024-07-26 12:25:58.174071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.041 qpair failed and we were unable to recover it.
00:25:05.041 [2024-07-26 12:25:58.183893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.041 [2024-07-26 12:25:58.184018] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.041 [2024-07-26 12:25:58.184044] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.041 [2024-07-26 12:25:58.184065] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.041 [2024-07-26 12:25:58.184081] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.041 [2024-07-26 12:25:58.184109] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.041 qpair failed and we were unable to recover it.
00:25:05.041 [2024-07-26 12:25:58.193850] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.041 [2024-07-26 12:25:58.193979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.041 [2024-07-26 12:25:58.194004] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.041 [2024-07-26 12:25:58.194018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.041 [2024-07-26 12:25:58.194031] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.041 [2024-07-26 12:25:58.194068] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.041 qpair failed and we were unable to recover it.
00:25:05.041 [2024-07-26 12:25:58.203876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.041 [2024-07-26 12:25:58.204007] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.041 [2024-07-26 12:25:58.204033] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.041 [2024-07-26 12:25:58.204048] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.041 [2024-07-26 12:25:58.204067] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.041 [2024-07-26 12:25:58.204096] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.041 qpair failed and we were unable to recover it.
00:25:05.041 [2024-07-26 12:25:58.213896] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.041 [2024-07-26 12:25:58.214027] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.041 [2024-07-26 12:25:58.214052] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.041 [2024-07-26 12:25:58.214074] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.041 [2024-07-26 12:25:58.214088] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.041 [2024-07-26 12:25:58.214115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.041 qpair failed and we were unable to recover it.
00:25:05.041 [2024-07-26 12:25:58.223925] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.041 [2024-07-26 12:25:58.224050] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.041 [2024-07-26 12:25:58.224080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.041 [2024-07-26 12:25:58.224096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.041 [2024-07-26 12:25:58.224114] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.041 [2024-07-26 12:25:58.224142] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.041 qpair failed and we were unable to recover it.
00:25:05.041 [2024-07-26 12:25:58.234044] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.041 [2024-07-26 12:25:58.234181] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.041 [2024-07-26 12:25:58.234206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.041 [2024-07-26 12:25:58.234221] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.041 [2024-07-26 12:25:58.234233] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.041 [2024-07-26 12:25:58.234261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.041 qpair failed and we were unable to recover it.
00:25:05.041 [2024-07-26 12:25:58.243995] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.041 [2024-07-26 12:25:58.244153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.041 [2024-07-26 12:25:58.244180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.042 [2024-07-26 12:25:58.244194] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.042 [2024-07-26 12:25:58.244207] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.042 [2024-07-26 12:25:58.244235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.042 qpair failed and we were unable to recover it.
00:25:05.042 [2024-07-26 12:25:58.254079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.042 [2024-07-26 12:25:58.254207] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.042 [2024-07-26 12:25:58.254232] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.042 [2024-07-26 12:25:58.254246] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.042 [2024-07-26 12:25:58.254259] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.042 [2024-07-26 12:25:58.254287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.042 qpair failed and we were unable to recover it.
00:25:05.042 [2024-07-26 12:25:58.264055] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.042 [2024-07-26 12:25:58.264184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.042 [2024-07-26 12:25:58.264209] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.042 [2024-07-26 12:25:58.264224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.042 [2024-07-26 12:25:58.264236] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.042 [2024-07-26 12:25:58.264264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.042 qpair failed and we were unable to recover it.
00:25:05.042 [2024-07-26 12:25:58.274152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.042 [2024-07-26 12:25:58.274308] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.042 [2024-07-26 12:25:58.274333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.042 [2024-07-26 12:25:58.274347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.042 [2024-07-26 12:25:58.274360] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.042 [2024-07-26 12:25:58.274388] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.042 qpair failed and we were unable to recover it.
00:25:05.042 [2024-07-26 12:25:58.284109] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.042 [2024-07-26 12:25:58.284269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.042 [2024-07-26 12:25:58.284297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.042 [2024-07-26 12:25:58.284312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.042 [2024-07-26 12:25:58.284326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.042 [2024-07-26 12:25:58.284354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.042 qpair failed and we were unable to recover it.
00:25:05.303 [2024-07-26 12:25:58.294136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.303 [2024-07-26 12:25:58.294272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.303 [2024-07-26 12:25:58.294299] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.303 [2024-07-26 12:25:58.294313] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.303 [2024-07-26 12:25:58.294327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.303 [2024-07-26 12:25:58.294355] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.303 qpair failed and we were unable to recover it.
00:25:05.303 [2024-07-26 12:25:58.304153] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.303 [2024-07-26 12:25:58.304275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.303 [2024-07-26 12:25:58.304300] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.303 [2024-07-26 12:25:58.304315] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.303 [2024-07-26 12:25:58.304327] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.303 [2024-07-26 12:25:58.304356] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.303 qpair failed and we were unable to recover it.
00:25:05.303 [2024-07-26 12:25:58.314188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.303 [2024-07-26 12:25:58.314318] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.303 [2024-07-26 12:25:58.314343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.303 [2024-07-26 12:25:58.314357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.303 [2024-07-26 12:25:58.314380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.303 [2024-07-26 12:25:58.314408] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.303 qpair failed and we were unable to recover it.
00:25:05.303 [2024-07-26 12:25:58.324250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.303 [2024-07-26 12:25:58.324378] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.303 [2024-07-26 12:25:58.324404] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.303 [2024-07-26 12:25:58.324419] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.303 [2024-07-26 12:25:58.324432] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.303 [2024-07-26 12:25:58.324462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.303 qpair failed and we were unable to recover it.
00:25:05.303 [2024-07-26 12:25:58.334336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.303 [2024-07-26 12:25:58.334499] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.303 [2024-07-26 12:25:58.334524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.303 [2024-07-26 12:25:58.334539] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.303 [2024-07-26 12:25:58.334551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.303 [2024-07-26 12:25:58.334581] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.303 qpair failed and we were unable to recover it.
00:25:05.303 [2024-07-26 12:25:58.344302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.303 [2024-07-26 12:25:58.344431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.303 [2024-07-26 12:25:58.344456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.303 [2024-07-26 12:25:58.344471] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.303 [2024-07-26 12:25:58.344484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.303 [2024-07-26 12:25:58.344511] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.303 qpair failed and we were unable to recover it.
00:25:05.303 [2024-07-26 12:25:58.354332] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.303 [2024-07-26 12:25:58.354459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.303 [2024-07-26 12:25:58.354484] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.303 [2024-07-26 12:25:58.354499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.303 [2024-07-26 12:25:58.354512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.303 [2024-07-26 12:25:58.354541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.303 qpair failed and we were unable to recover it.
00:25:05.303 [2024-07-26 12:25:58.364383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.303 [2024-07-26 12:25:58.364507] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.303 [2024-07-26 12:25:58.364532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.303 [2024-07-26 12:25:58.364546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.303 [2024-07-26 12:25:58.364559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.303 [2024-07-26 12:25:58.364586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.303 qpair failed and we were unable to recover it.
00:25:05.303 [2024-07-26 12:25:58.374359] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.303 [2024-07-26 12:25:58.374484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.303 [2024-07-26 12:25:58.374509] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.303 [2024-07-26 12:25:58.374524] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.303 [2024-07-26 12:25:58.374537] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.303 [2024-07-26 12:25:58.374565] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.303 qpair failed and we were unable to recover it.
00:25:05.303 [2024-07-26 12:25:58.384490] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.303 [2024-07-26 12:25:58.384618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.303 [2024-07-26 12:25:58.384643] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.303 [2024-07-26 12:25:58.384658] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.303 [2024-07-26 12:25:58.384671] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.303 [2024-07-26 12:25:58.384698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.303 qpair failed and we were unable to recover it.
00:25:05.303 [2024-07-26 12:25:58.394433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.303 [2024-07-26 12:25:58.394597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.303 [2024-07-26 12:25:58.394621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.303 [2024-07-26 12:25:58.394635] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.303 [2024-07-26 12:25:58.394648] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.303 [2024-07-26 12:25:58.394675] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.303 qpair failed and we were unable to recover it.
00:25:05.303 [2024-07-26 12:25:58.404579] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.304 [2024-07-26 12:25:58.404705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.304 [2024-07-26 12:25:58.404730] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.304 [2024-07-26 12:25:58.404751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.304 [2024-07-26 12:25:58.404764] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.304 [2024-07-26 12:25:58.404792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.304 qpair failed and we were unable to recover it.
00:25:05.304 [2024-07-26 12:25:58.414591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.304 [2024-07-26 12:25:58.414730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.304 [2024-07-26 12:25:58.414755] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.304 [2024-07-26 12:25:58.414770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.304 [2024-07-26 12:25:58.414782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.304 [2024-07-26 12:25:58.414810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.304 qpair failed and we were unable to recover it.
00:25:05.304 [2024-07-26 12:25:58.424575] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.304 [2024-07-26 12:25:58.424709] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.304 [2024-07-26 12:25:58.424734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.304 [2024-07-26 12:25:58.424749] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.304 [2024-07-26 12:25:58.424762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.304 [2024-07-26 12:25:58.424789] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.304 qpair failed and we were unable to recover it.
00:25:05.304 [2024-07-26 12:25:58.434601] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.304 [2024-07-26 12:25:58.434748] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.304 [2024-07-26 12:25:58.434773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.304 [2024-07-26 12:25:58.434788] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.304 [2024-07-26 12:25:58.434800] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.304 [2024-07-26 12:25:58.434829] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.304 qpair failed and we were unable to recover it.
00:25:05.304 [2024-07-26 12:25:58.444565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.304 [2024-07-26 12:25:58.444691] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.304 [2024-07-26 12:25:58.444716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.304 [2024-07-26 12:25:58.444731] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.304 [2024-07-26 12:25:58.444744] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.304 [2024-07-26 12:25:58.444771] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.304 qpair failed and we were unable to recover it.
00:25:05.304 [2024-07-26 12:25:58.454642] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.304 [2024-07-26 12:25:58.454769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.304 [2024-07-26 12:25:58.454793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.304 [2024-07-26 12:25:58.454808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.304 [2024-07-26 12:25:58.454821] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.304 [2024-07-26 12:25:58.454849] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.304 qpair failed and we were unable to recover it.
00:25:05.304 [2024-07-26 12:25:58.464654] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.304 [2024-07-26 12:25:58.464820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.304 [2024-07-26 12:25:58.464844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.304 [2024-07-26 12:25:58.464859] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.304 [2024-07-26 12:25:58.464872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.304 [2024-07-26 12:25:58.464899] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.304 qpair failed and we were unable to recover it.
00:25:05.304 [2024-07-26 12:25:58.474746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.304 [2024-07-26 12:25:58.474871] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.304 [2024-07-26 12:25:58.474896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.304 [2024-07-26 12:25:58.474910] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.304 [2024-07-26 12:25:58.474923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.304 [2024-07-26 12:25:58.474951] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.304 qpair failed and we were unable to recover it. 
00:25:05.304 [2024-07-26 12:25:58.484715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.304 [2024-07-26 12:25:58.484856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.304 [2024-07-26 12:25:58.484882] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.304 [2024-07-26 12:25:58.484896] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.304 [2024-07-26 12:25:58.484909] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.304 [2024-07-26 12:25:58.484937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.304 qpair failed and we were unable to recover it. 
00:25:05.304 [2024-07-26 12:25:58.494718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.304 [2024-07-26 12:25:58.494843] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.304 [2024-07-26 12:25:58.494867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.304 [2024-07-26 12:25:58.494887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.304 [2024-07-26 12:25:58.494900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.304 [2024-07-26 12:25:58.494928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.304 qpair failed and we were unable to recover it. 
00:25:05.304 [2024-07-26 12:25:58.504725] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.304 [2024-07-26 12:25:58.504847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.304 [2024-07-26 12:25:58.504873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.304 [2024-07-26 12:25:58.504888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.304 [2024-07-26 12:25:58.504900] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.304 [2024-07-26 12:25:58.504928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.304 qpair failed and we were unable to recover it. 
00:25:05.304 [2024-07-26 12:25:58.514845] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.304 [2024-07-26 12:25:58.514970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.304 [2024-07-26 12:25:58.514995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.304 [2024-07-26 12:25:58.515009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.304 [2024-07-26 12:25:58.515022] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.304 [2024-07-26 12:25:58.515050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.304 qpair failed and we were unable to recover it. 
00:25:05.304 [2024-07-26 12:25:58.524796] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.304 [2024-07-26 12:25:58.524924] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.304 [2024-07-26 12:25:58.524950] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.304 [2024-07-26 12:25:58.524965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.305 [2024-07-26 12:25:58.524978] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.305 [2024-07-26 12:25:58.525005] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.305 qpair failed and we were unable to recover it. 
00:25:05.305 [2024-07-26 12:25:58.534816] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.305 [2024-07-26 12:25:58.534944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.305 [2024-07-26 12:25:58.534969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.305 [2024-07-26 12:25:58.534983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.305 [2024-07-26 12:25:58.534995] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.305 [2024-07-26 12:25:58.535023] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.305 qpair failed and we were unable to recover it. 
00:25:05.305 [2024-07-26 12:25:58.544829] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.305 [2024-07-26 12:25:58.544993] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.305 [2024-07-26 12:25:58.545018] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.305 [2024-07-26 12:25:58.545032] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.305 [2024-07-26 12:25:58.545045] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.305 [2024-07-26 12:25:58.545079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.305 qpair failed and we were unable to recover it. 
00:25:05.565 [2024-07-26 12:25:58.554870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.565 [2024-07-26 12:25:58.555039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.565 [2024-07-26 12:25:58.555071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.565 [2024-07-26 12:25:58.555088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.565 [2024-07-26 12:25:58.555102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.565 [2024-07-26 12:25:58.555130] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.565 qpair failed and we were unable to recover it. 
00:25:05.565 [2024-07-26 12:25:58.564893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.565 [2024-07-26 12:25:58.565016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.565 [2024-07-26 12:25:58.565041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.565 [2024-07-26 12:25:58.565056] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.565 [2024-07-26 12:25:58.565082] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.565 [2024-07-26 12:25:58.565110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.565 qpair failed and we were unable to recover it. 
00:25:05.565 [2024-07-26 12:25:58.574944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.565 [2024-07-26 12:25:58.575078] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.565 [2024-07-26 12:25:58.575103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.565 [2024-07-26 12:25:58.575118] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.565 [2024-07-26 12:25:58.575131] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.565 [2024-07-26 12:25:58.575159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.565 qpair failed and we were unable to recover it. 
00:25:05.565 [2024-07-26 12:25:58.584955] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.565 [2024-07-26 12:25:58.585082] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.565 [2024-07-26 12:25:58.585112] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.565 [2024-07-26 12:25:58.585128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.565 [2024-07-26 12:25:58.585141] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.565 [2024-07-26 12:25:58.585168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.565 qpair failed and we were unable to recover it. 
00:25:05.565 [2024-07-26 12:25:58.595020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.565 [2024-07-26 12:25:58.595160] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.565 [2024-07-26 12:25:58.595185] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.565 [2024-07-26 12:25:58.595200] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.565 [2024-07-26 12:25:58.595213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.565 [2024-07-26 12:25:58.595242] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.565 qpair failed and we were unable to recover it. 
00:25:05.565 [2024-07-26 12:25:58.605010] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.565 [2024-07-26 12:25:58.605136] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.565 [2024-07-26 12:25:58.605161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.565 [2024-07-26 12:25:58.605175] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.565 [2024-07-26 12:25:58.605188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.565 [2024-07-26 12:25:58.605216] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.565 qpair failed and we were unable to recover it. 
00:25:05.565 [2024-07-26 12:25:58.615048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.565 [2024-07-26 12:25:58.615225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.565 [2024-07-26 12:25:58.615249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.565 [2024-07-26 12:25:58.615264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.565 [2024-07-26 12:25:58.615277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.565 [2024-07-26 12:25:58.615304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.565 qpair failed and we were unable to recover it. 
00:25:05.565 [2024-07-26 12:25:58.625085] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.565 [2024-07-26 12:25:58.625232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.565 [2024-07-26 12:25:58.625258] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.565 [2024-07-26 12:25:58.625272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.565 [2024-07-26 12:25:58.625285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.565 [2024-07-26 12:25:58.625313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.565 qpair failed and we were unable to recover it. 
00:25:05.565 [2024-07-26 12:25:58.635156] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.565 [2024-07-26 12:25:58.635322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.565 [2024-07-26 12:25:58.635348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.565 [2024-07-26 12:25:58.635362] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.565 [2024-07-26 12:25:58.635375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.565 [2024-07-26 12:25:58.635403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.565 qpair failed and we were unable to recover it. 
00:25:05.565 [2024-07-26 12:25:58.645189] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.565 [2024-07-26 12:25:58.645315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.565 [2024-07-26 12:25:58.645342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.565 [2024-07-26 12:25:58.645361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.565 [2024-07-26 12:25:58.645375] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.565 [2024-07-26 12:25:58.645403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.565 qpair failed and we were unable to recover it. 
00:25:05.565 [2024-07-26 12:25:58.655198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.565 [2024-07-26 12:25:58.655362] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.565 [2024-07-26 12:25:58.655389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.565 [2024-07-26 12:25:58.655403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.565 [2024-07-26 12:25:58.655416] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.565 [2024-07-26 12:25:58.655444] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.565 qpair failed and we were unable to recover it. 
00:25:05.565 [2024-07-26 12:25:58.665284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.565 [2024-07-26 12:25:58.665410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.566 [2024-07-26 12:25:58.665435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.566 [2024-07-26 12:25:58.665449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.566 [2024-07-26 12:25:58.665462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.566 [2024-07-26 12:25:58.665490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.566 qpair failed and we were unable to recover it. 
00:25:05.566 [2024-07-26 12:25:58.675243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.566 [2024-07-26 12:25:58.675375] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.566 [2024-07-26 12:25:58.675406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.566 [2024-07-26 12:25:58.675421] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.566 [2024-07-26 12:25:58.675434] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.566 [2024-07-26 12:25:58.675462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.566 qpair failed and we were unable to recover it. 
00:25:05.566 [2024-07-26 12:25:58.685267] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.566 [2024-07-26 12:25:58.685398] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.566 [2024-07-26 12:25:58.685423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.566 [2024-07-26 12:25:58.685437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.566 [2024-07-26 12:25:58.685450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.566 [2024-07-26 12:25:58.685477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.566 qpair failed and we were unable to recover it. 
00:25:05.566 [2024-07-26 12:25:58.695305] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.566 [2024-07-26 12:25:58.695430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.566 [2024-07-26 12:25:58.695455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.566 [2024-07-26 12:25:58.695469] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.566 [2024-07-26 12:25:58.695483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.566 [2024-07-26 12:25:58.695510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.566 qpair failed and we were unable to recover it. 
00:25:05.566 [2024-07-26 12:25:58.705380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.566 [2024-07-26 12:25:58.705559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.566 [2024-07-26 12:25:58.705587] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.566 [2024-07-26 12:25:58.705602] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.566 [2024-07-26 12:25:58.705615] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.566 [2024-07-26 12:25:58.705644] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.566 qpair failed and we were unable to recover it. 
00:25:05.566 [2024-07-26 12:25:58.715433] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.566 [2024-07-26 12:25:58.715570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.566 [2024-07-26 12:25:58.715596] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.566 [2024-07-26 12:25:58.715611] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.566 [2024-07-26 12:25:58.715623] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.566 [2024-07-26 12:25:58.715657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.566 qpair failed and we were unable to recover it. 
00:25:05.566 [2024-07-26 12:25:58.725371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.566 [2024-07-26 12:25:58.725498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.566 [2024-07-26 12:25:58.725524] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.566 [2024-07-26 12:25:58.725538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.566 [2024-07-26 12:25:58.725551] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.566 [2024-07-26 12:25:58.725579] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.566 qpair failed and we were unable to recover it. 
00:25:05.566 [2024-07-26 12:25:58.735414] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.566 [2024-07-26 12:25:58.735534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.566 [2024-07-26 12:25:58.735559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.566 [2024-07-26 12:25:58.735573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.566 [2024-07-26 12:25:58.735586] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.566 [2024-07-26 12:25:58.735614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.566 qpair failed and we were unable to recover it. 
00:25:05.566 [2024-07-26 12:25:58.745474] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:05.566 [2024-07-26 12:25:58.745597] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:05.566 [2024-07-26 12:25:58.745622] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:05.566 [2024-07-26 12:25:58.745636] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:05.566 [2024-07-26 12:25:58.745650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:05.566 [2024-07-26 12:25:58.745677] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:05.566 qpair failed and we were unable to recover it. 
00:25:05.566 [2024-07-26 12:25:58.755456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.566 [2024-07-26 12:25:58.755584] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.566 [2024-07-26 12:25:58.755609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.566 [2024-07-26 12:25:58.755624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.566 [2024-07-26 12:25:58.755637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.566 [2024-07-26 12:25:58.755664] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.566 qpair failed and we were unable to recover it.
00:25:05.566 [2024-07-26 12:25:58.765542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.566 [2024-07-26 12:25:58.765679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.566 [2024-07-26 12:25:58.765708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.566 [2024-07-26 12:25:58.765724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.566 [2024-07-26 12:25:58.765737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.566 [2024-07-26 12:25:58.765764] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.566 qpair failed and we were unable to recover it.
00:25:05.566 [2024-07-26 12:25:58.775527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.566 [2024-07-26 12:25:58.775670] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.566 [2024-07-26 12:25:58.775695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.566 [2024-07-26 12:25:58.775710] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.566 [2024-07-26 12:25:58.775722] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.566 [2024-07-26 12:25:58.775750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.566 qpair failed and we were unable to recover it.
00:25:05.566 [2024-07-26 12:25:58.785615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.566 [2024-07-26 12:25:58.785772] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.566 [2024-07-26 12:25:58.785797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.566 [2024-07-26 12:25:58.785812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.566 [2024-07-26 12:25:58.785824] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.566 [2024-07-26 12:25:58.785852] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.567 qpair failed and we were unable to recover it.
00:25:05.567 [2024-07-26 12:25:58.795561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.567 [2024-07-26 12:25:58.795741] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.567 [2024-07-26 12:25:58.795766] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.567 [2024-07-26 12:25:58.795780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.567 [2024-07-26 12:25:58.795793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.567 [2024-07-26 12:25:58.795821] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.567 qpair failed and we were unable to recover it.
00:25:05.567 [2024-07-26 12:25:58.805602] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.567 [2024-07-26 12:25:58.805728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.567 [2024-07-26 12:25:58.805753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.567 [2024-07-26 12:25:58.805767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.567 [2024-07-26 12:25:58.805779] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.567 [2024-07-26 12:25:58.805812] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.567 qpair failed and we were unable to recover it.
00:25:05.567 [2024-07-26 12:25:58.815615] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.567 [2024-07-26 12:25:58.815739] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.567 [2024-07-26 12:25:58.815765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.567 [2024-07-26 12:25:58.815779] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.567 [2024-07-26 12:25:58.815792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.567 [2024-07-26 12:25:58.815819] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.567 qpair failed and we were unable to recover it.
00:25:05.826 [2024-07-26 12:25:58.825662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.826 [2024-07-26 12:25:58.825784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.826 [2024-07-26 12:25:58.825809] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.826 [2024-07-26 12:25:58.825823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.826 [2024-07-26 12:25:58.825836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.826 [2024-07-26 12:25:58.825863] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.826 qpair failed and we were unable to recover it.
00:25:05.826 [2024-07-26 12:25:58.835671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.826 [2024-07-26 12:25:58.835799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.826 [2024-07-26 12:25:58.835824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.826 [2024-07-26 12:25:58.835839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.826 [2024-07-26 12:25:58.835852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.826 [2024-07-26 12:25:58.835879] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.826 qpair failed and we were unable to recover it.
00:25:05.826 [2024-07-26 12:25:58.845696] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.826 [2024-07-26 12:25:58.845813] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.826 [2024-07-26 12:25:58.845839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.826 [2024-07-26 12:25:58.845854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.826 [2024-07-26 12:25:58.845866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.826 [2024-07-26 12:25:58.845894] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.826 qpair failed and we were unable to recover it.
00:25:05.826 [2024-07-26 12:25:58.855849] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.826 [2024-07-26 12:25:58.855984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.826 [2024-07-26 12:25:58.856014] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.826 [2024-07-26 12:25:58.856029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.826 [2024-07-26 12:25:58.856042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.826 [2024-07-26 12:25:58.856075] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.826 qpair failed and we were unable to recover it.
00:25:05.826 [2024-07-26 12:25:58.865818] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.826 [2024-07-26 12:25:58.865970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.826 [2024-07-26 12:25:58.865996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.826 [2024-07-26 12:25:58.866010] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.826 [2024-07-26 12:25:58.866023] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.826 [2024-07-26 12:25:58.866050] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.826 qpair failed and we were unable to recover it.
00:25:05.826 [2024-07-26 12:25:58.875842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.826 [2024-07-26 12:25:58.875990] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.826 [2024-07-26 12:25:58.876015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.826 [2024-07-26 12:25:58.876029] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.826 [2024-07-26 12:25:58.876042] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.826 [2024-07-26 12:25:58.876078] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.826 qpair failed and we were unable to recover it.
00:25:05.826 [2024-07-26 12:25:58.885908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.826 [2024-07-26 12:25:58.886093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.826 [2024-07-26 12:25:58.886118] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.826 [2024-07-26 12:25:58.886132] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.826 [2024-07-26 12:25:58.886145] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.826 [2024-07-26 12:25:58.886173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.826 qpair failed and we were unable to recover it.
00:25:05.827 [2024-07-26 12:25:58.895847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.827 [2024-07-26 12:25:58.895970] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.827 [2024-07-26 12:25:58.895995] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.827 [2024-07-26 12:25:58.896009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.827 [2024-07-26 12:25:58.896027] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.827 [2024-07-26 12:25:58.896055] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.827 qpair failed and we were unable to recover it.
00:25:05.827 [2024-07-26 12:25:58.905869] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.827 [2024-07-26 12:25:58.905987] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.827 [2024-07-26 12:25:58.906012] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.827 [2024-07-26 12:25:58.906027] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.827 [2024-07-26 12:25:58.906039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.827 [2024-07-26 12:25:58.906074] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.827 qpair failed and we were unable to recover it.
00:25:05.827 [2024-07-26 12:25:58.916021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.827 [2024-07-26 12:25:58.916199] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.827 [2024-07-26 12:25:58.916225] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.827 [2024-07-26 12:25:58.916240] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.827 [2024-07-26 12:25:58.916252] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.827 [2024-07-26 12:25:58.916279] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.827 qpair failed and we were unable to recover it.
00:25:05.827 [2024-07-26 12:25:58.925964] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.827 [2024-07-26 12:25:58.926100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.827 [2024-07-26 12:25:58.926125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.827 [2024-07-26 12:25:58.926139] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.827 [2024-07-26 12:25:58.926152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.827 [2024-07-26 12:25:58.926180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.827 qpair failed and we were unable to recover it.
00:25:05.827 [2024-07-26 12:25:58.936056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.827 [2024-07-26 12:25:58.936193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.827 [2024-07-26 12:25:58.936218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.827 [2024-07-26 12:25:58.936233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.827 [2024-07-26 12:25:58.936246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.827 [2024-07-26 12:25:58.936273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.827 qpair failed and we were unable to recover it.
00:25:05.827 [2024-07-26 12:25:58.946021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.827 [2024-07-26 12:25:58.946162] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.827 [2024-07-26 12:25:58.946189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.827 [2024-07-26 12:25:58.946203] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.827 [2024-07-26 12:25:58.946216] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.827 [2024-07-26 12:25:58.946243] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.827 qpair failed and we were unable to recover it.
00:25:05.827 [2024-07-26 12:25:58.956074] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.827 [2024-07-26 12:25:58.956225] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.827 [2024-07-26 12:25:58.956249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.827 [2024-07-26 12:25:58.956264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.827 [2024-07-26 12:25:58.956277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.827 [2024-07-26 12:25:58.956305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.827 qpair failed and we were unable to recover it.
00:25:05.827 [2024-07-26 12:25:58.966066] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.827 [2024-07-26 12:25:58.966198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.827 [2024-07-26 12:25:58.966222] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.827 [2024-07-26 12:25:58.966236] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.827 [2024-07-26 12:25:58.966248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.827 [2024-07-26 12:25:58.966275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.827 qpair failed and we were unable to recover it.
00:25:05.827 [2024-07-26 12:25:58.976083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.827 [2024-07-26 12:25:58.976204] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.827 [2024-07-26 12:25:58.976230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.827 [2024-07-26 12:25:58.976244] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.827 [2024-07-26 12:25:58.976257] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.827 [2024-07-26 12:25:58.976284] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.827 qpair failed and we were unable to recover it.
00:25:05.827 [2024-07-26 12:25:58.986092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.827 [2024-07-26 12:25:58.986213] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.827 [2024-07-26 12:25:58.986239] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.827 [2024-07-26 12:25:58.986253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.827 [2024-07-26 12:25:58.986271] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.827 [2024-07-26 12:25:58.986299] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.827 qpair failed and we were unable to recover it.
00:25:05.827 [2024-07-26 12:25:58.996175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.827 [2024-07-26 12:25:58.996304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.827 [2024-07-26 12:25:58.996329] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.827 [2024-07-26 12:25:58.996344] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.827 [2024-07-26 12:25:58.996357] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.827 [2024-07-26 12:25:58.996384] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.827 qpair failed and we were unable to recover it.
00:25:05.827 [2024-07-26 12:25:59.006141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.827 [2024-07-26 12:25:59.006272] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.827 [2024-07-26 12:25:59.006297] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.827 [2024-07-26 12:25:59.006312] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.827 [2024-07-26 12:25:59.006325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.827 [2024-07-26 12:25:59.006352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.827 qpair failed and we were unable to recover it.
00:25:05.827 [2024-07-26 12:25:59.016172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.827 [2024-07-26 12:25:59.016305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.828 [2024-07-26 12:25:59.016331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.828 [2024-07-26 12:25:59.016345] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.828 [2024-07-26 12:25:59.016358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.828 [2024-07-26 12:25:59.016386] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.828 qpair failed and we were unable to recover it.
00:25:05.828 [2024-07-26 12:25:59.026213] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.828 [2024-07-26 12:25:59.026342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.828 [2024-07-26 12:25:59.026367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.828 [2024-07-26 12:25:59.026381] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.828 [2024-07-26 12:25:59.026395] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.828 [2024-07-26 12:25:59.026423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.828 qpair failed and we were unable to recover it.
00:25:05.828 [2024-07-26 12:25:59.036259] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.828 [2024-07-26 12:25:59.036396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.828 [2024-07-26 12:25:59.036421] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.828 [2024-07-26 12:25:59.036436] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.828 [2024-07-26 12:25:59.036449] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.828 [2024-07-26 12:25:59.036476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.828 qpair failed and we were unable to recover it.
00:25:05.828 [2024-07-26 12:25:59.046268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.828 [2024-07-26 12:25:59.046397] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.828 [2024-07-26 12:25:59.046423] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.828 [2024-07-26 12:25:59.046437] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.828 [2024-07-26 12:25:59.046450] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.828 [2024-07-26 12:25:59.046477] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.828 qpair failed and we were unable to recover it.
00:25:05.828 [2024-07-26 12:25:59.056289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.828 [2024-07-26 12:25:59.056418] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.828 [2024-07-26 12:25:59.056443] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.828 [2024-07-26 12:25:59.056458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.828 [2024-07-26 12:25:59.056471] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.828 [2024-07-26 12:25:59.056498] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.828 qpair failed and we were unable to recover it.
00:25:05.828 [2024-07-26 12:25:59.066365] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.828 [2024-07-26 12:25:59.066493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.828 [2024-07-26 12:25:59.066519] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.828 [2024-07-26 12:25:59.066533] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.828 [2024-07-26 12:25:59.066546] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.828 [2024-07-26 12:25:59.066574] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.828 qpair failed and we were unable to recover it.
00:25:05.828 [2024-07-26 12:25:59.076364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:05.828 [2024-07-26 12:25:59.076496] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:05.828 [2024-07-26 12:25:59.076521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:05.828 [2024-07-26 12:25:59.076536] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:05.828 [2024-07-26 12:25:59.076557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:05.828 [2024-07-26 12:25:59.076586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:05.828 qpair failed and we were unable to recover it.
00:25:06.087 [2024-07-26 12:25:59.086368] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.087 [2024-07-26 12:25:59.086504] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.087 [2024-07-26 12:25:59.086529] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.087 [2024-07-26 12:25:59.086544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.087 [2024-07-26 12:25:59.086557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.087 [2024-07-26 12:25:59.086584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.087 qpair failed and we were unable to recover it.
00:25:06.087 [2024-07-26 12:25:59.096428] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.087 [2024-07-26 12:25:59.096552] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.087 [2024-07-26 12:25:59.096577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.087 [2024-07-26 12:25:59.096592] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.087 [2024-07-26 12:25:59.096605] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.087 [2024-07-26 12:25:59.096633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.087 qpair failed and we were unable to recover it.
00:25:06.087 [2024-07-26 12:25:59.106450] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.087 [2024-07-26 12:25:59.106577] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.087 [2024-07-26 12:25:59.106602] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.087 [2024-07-26 12:25:59.106617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.087 [2024-07-26 12:25:59.106630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.087 [2024-07-26 12:25:59.106657] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.087 qpair failed and we were unable to recover it.
00:25:06.087 [2024-07-26 12:25:59.116455] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.087 [2024-07-26 12:25:59.116588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.087 [2024-07-26 12:25:59.116613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.087 [2024-07-26 12:25:59.116627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.087 [2024-07-26 12:25:59.116641] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.087 [2024-07-26 12:25:59.116668] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.087 qpair failed and we were unable to recover it. 
00:25:06.087 [2024-07-26 12:25:59.126481] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.087 [2024-07-26 12:25:59.126632] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.087 [2024-07-26 12:25:59.126659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.087 [2024-07-26 12:25:59.126673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.087 [2024-07-26 12:25:59.126687] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.087 [2024-07-26 12:25:59.126714] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.087 qpair failed and we were unable to recover it. 
00:25:06.087 [2024-07-26 12:25:59.136527] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.087 [2024-07-26 12:25:59.136651] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.087 [2024-07-26 12:25:59.136677] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.087 [2024-07-26 12:25:59.136691] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.087 [2024-07-26 12:25:59.136704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.087 [2024-07-26 12:25:59.136731] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.087 qpair failed and we were unable to recover it. 
00:25:06.087 [2024-07-26 12:25:59.146530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.087 [2024-07-26 12:25:59.146658] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.087 [2024-07-26 12:25:59.146684] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.087 [2024-07-26 12:25:59.146698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.087 [2024-07-26 12:25:59.146711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.087 [2024-07-26 12:25:59.146738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.087 qpair failed and we were unable to recover it. 
00:25:06.087 [2024-07-26 12:25:59.156552] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.087 [2024-07-26 12:25:59.156682] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.088 [2024-07-26 12:25:59.156707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.088 [2024-07-26 12:25:59.156721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.088 [2024-07-26 12:25:59.156734] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.088 [2024-07-26 12:25:59.156762] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.088 qpair failed and we were unable to recover it. 
00:25:06.088 [2024-07-26 12:25:59.166596] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.088 [2024-07-26 12:25:59.166728] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.088 [2024-07-26 12:25:59.166754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.088 [2024-07-26 12:25:59.166774] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.088 [2024-07-26 12:25:59.166787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.088 [2024-07-26 12:25:59.166815] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.088 qpair failed and we were unable to recover it. 
00:25:06.088 [2024-07-26 12:25:59.176670] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.088 [2024-07-26 12:25:59.176827] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.088 [2024-07-26 12:25:59.176853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.088 [2024-07-26 12:25:59.176867] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.088 [2024-07-26 12:25:59.176880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.088 [2024-07-26 12:25:59.176908] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.088 qpair failed and we were unable to recover it. 
00:25:06.088 [2024-07-26 12:25:59.186720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.088 [2024-07-26 12:25:59.186849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.088 [2024-07-26 12:25:59.186875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.088 [2024-07-26 12:25:59.186892] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.088 [2024-07-26 12:25:59.186908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.088 [2024-07-26 12:25:59.186937] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.088 qpair failed and we were unable to recover it. 
00:25:06.088 [2024-07-26 12:25:59.196669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.088 [2024-07-26 12:25:59.196792] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.088 [2024-07-26 12:25:59.196817] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.088 [2024-07-26 12:25:59.196832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.088 [2024-07-26 12:25:59.196844] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.088 [2024-07-26 12:25:59.196872] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.088 qpair failed and we were unable to recover it. 
00:25:06.088 [2024-07-26 12:25:59.206707] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.088 [2024-07-26 12:25:59.206838] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.088 [2024-07-26 12:25:59.206863] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.088 [2024-07-26 12:25:59.206877] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.088 [2024-07-26 12:25:59.206891] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.088 [2024-07-26 12:25:59.206919] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.088 qpair failed and we were unable to recover it. 
00:25:06.088 [2024-07-26 12:25:59.216718] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.088 [2024-07-26 12:25:59.216836] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.088 [2024-07-26 12:25:59.216862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.088 [2024-07-26 12:25:59.216876] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.088 [2024-07-26 12:25:59.216889] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.088 [2024-07-26 12:25:59.216917] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.088 qpair failed and we were unable to recover it. 
00:25:06.088 [2024-07-26 12:25:59.226754] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.088 [2024-07-26 12:25:59.226872] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.088 [2024-07-26 12:25:59.226897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.088 [2024-07-26 12:25:59.226912] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.088 [2024-07-26 12:25:59.226924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.088 [2024-07-26 12:25:59.226952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.088 qpair failed and we were unable to recover it. 
00:25:06.088 [2024-07-26 12:25:59.236775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.088 [2024-07-26 12:25:59.236905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.088 [2024-07-26 12:25:59.236930] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.088 [2024-07-26 12:25:59.236944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.088 [2024-07-26 12:25:59.236957] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.088 [2024-07-26 12:25:59.236985] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.088 qpair failed and we were unable to recover it. 
00:25:06.088 [2024-07-26 12:25:59.246799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.088 [2024-07-26 12:25:59.246926] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.088 [2024-07-26 12:25:59.246952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.088 [2024-07-26 12:25:59.246967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.088 [2024-07-26 12:25:59.246980] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.088 [2024-07-26 12:25:59.247008] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.088 qpair failed and we were unable to recover it. 
00:25:06.088 [2024-07-26 12:25:59.256823] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.088 [2024-07-26 12:25:59.256946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.088 [2024-07-26 12:25:59.256970] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.088 [2024-07-26 12:25:59.256991] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.088 [2024-07-26 12:25:59.257005] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.088 [2024-07-26 12:25:59.257032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.088 qpair failed and we were unable to recover it. 
00:25:06.088 [2024-07-26 12:25:59.266872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.088 [2024-07-26 12:25:59.267000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.088 [2024-07-26 12:25:59.267025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.088 [2024-07-26 12:25:59.267040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.088 [2024-07-26 12:25:59.267052] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.088 [2024-07-26 12:25:59.267092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.088 qpair failed and we were unable to recover it. 
00:25:06.088 [2024-07-26 12:25:59.276909] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.088 [2024-07-26 12:25:59.277038] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.088 [2024-07-26 12:25:59.277068] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.088 [2024-07-26 12:25:59.277084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.088 [2024-07-26 12:25:59.277097] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.088 [2024-07-26 12:25:59.277125] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.088 qpair failed and we were unable to recover it. 
00:25:06.089 [2024-07-26 12:25:59.286914] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.089 [2024-07-26 12:25:59.287047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.089 [2024-07-26 12:25:59.287080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.089 [2024-07-26 12:25:59.287096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.089 [2024-07-26 12:25:59.287108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.089 [2024-07-26 12:25:59.287136] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.089 qpair failed and we were unable to recover it. 
00:25:06.089 [2024-07-26 12:25:59.296950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.089 [2024-07-26 12:25:59.297100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.089 [2024-07-26 12:25:59.297125] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.089 [2024-07-26 12:25:59.297140] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.089 [2024-07-26 12:25:59.297152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.089 [2024-07-26 12:25:59.297181] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.089 qpair failed and we were unable to recover it. 
00:25:06.089 [2024-07-26 12:25:59.307002] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.089 [2024-07-26 12:25:59.307133] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.089 [2024-07-26 12:25:59.307159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.089 [2024-07-26 12:25:59.307173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.089 [2024-07-26 12:25:59.307186] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.089 [2024-07-26 12:25:59.307214] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.089 qpair failed and we were unable to recover it. 
00:25:06.089 [2024-07-26 12:25:59.317072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.089 [2024-07-26 12:25:59.317202] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.089 [2024-07-26 12:25:59.317227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.089 [2024-07-26 12:25:59.317241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.089 [2024-07-26 12:25:59.317254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.089 [2024-07-26 12:25:59.317283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.089 qpair failed and we were unable to recover it. 
00:25:06.089 [2024-07-26 12:25:59.327016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.089 [2024-07-26 12:25:59.327147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.089 [2024-07-26 12:25:59.327173] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.089 [2024-07-26 12:25:59.327187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.089 [2024-07-26 12:25:59.327200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.089 [2024-07-26 12:25:59.327228] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.089 qpair failed and we were unable to recover it. 
00:25:06.089 [2024-07-26 12:25:59.337105] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.089 [2024-07-26 12:25:59.337228] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.089 [2024-07-26 12:25:59.337254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.089 [2024-07-26 12:25:59.337269] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.089 [2024-07-26 12:25:59.337282] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.089 [2024-07-26 12:25:59.337310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.089 qpair failed and we were unable to recover it. 
00:25:06.348 [2024-07-26 12:25:59.347095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.348 [2024-07-26 12:25:59.347216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.348 [2024-07-26 12:25:59.347241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.348 [2024-07-26 12:25:59.347262] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.348 [2024-07-26 12:25:59.347275] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.348 [2024-07-26 12:25:59.347303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.348 qpair failed and we were unable to recover it. 
00:25:06.348 [2024-07-26 12:25:59.357117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.349 [2024-07-26 12:25:59.357250] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.349 [2024-07-26 12:25:59.357275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.349 [2024-07-26 12:25:59.357289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.349 [2024-07-26 12:25:59.357302] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.349 [2024-07-26 12:25:59.357330] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.349 qpair failed and we were unable to recover it. 
00:25:06.349 [2024-07-26 12:25:59.367175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.349 [2024-07-26 12:25:59.367349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.349 [2024-07-26 12:25:59.367374] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.349 [2024-07-26 12:25:59.367388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.349 [2024-07-26 12:25:59.367401] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.349 [2024-07-26 12:25:59.367427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.349 qpair failed and we were unable to recover it. 
00:25:06.349 [2024-07-26 12:25:59.377193] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.349 [2024-07-26 12:25:59.377317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.349 [2024-07-26 12:25:59.377342] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.349 [2024-07-26 12:25:59.377356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.349 [2024-07-26 12:25:59.377369] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.349 [2024-07-26 12:25:59.377396] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.349 qpair failed and we were unable to recover it. 
00:25:06.349 [2024-07-26 12:25:59.387223] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.349 [2024-07-26 12:25:59.387345] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.349 [2024-07-26 12:25:59.387369] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.349 [2024-07-26 12:25:59.387384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.349 [2024-07-26 12:25:59.387396] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.349 [2024-07-26 12:25:59.387423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.349 qpair failed and we were unable to recover it. 
00:25:06.349 [2024-07-26 12:25:59.397287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.349 [2024-07-26 12:25:59.397420] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.349 [2024-07-26 12:25:59.397445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.349 [2024-07-26 12:25:59.397459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.349 [2024-07-26 12:25:59.397472] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.349 [2024-07-26 12:25:59.397500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.349 qpair failed and we were unable to recover it. 
00:25:06.349 [2024-07-26 12:25:59.407307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.349 [2024-07-26 12:25:59.407461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.349 [2024-07-26 12:25:59.407486] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.349 [2024-07-26 12:25:59.407501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.349 [2024-07-26 12:25:59.407513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.349 [2024-07-26 12:25:59.407541] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.349 qpair failed and we were unable to recover it. 
00:25:06.349 [2024-07-26 12:25:59.417381] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.349 [2024-07-26 12:25:59.417540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.349 [2024-07-26 12:25:59.417565] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.349 [2024-07-26 12:25:59.417580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.349 [2024-07-26 12:25:59.417593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.349 [2024-07-26 12:25:59.417621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.349 qpair failed and we were unable to recover it. 
00:25:06.349 [2024-07-26 12:25:59.427350] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.349 [2024-07-26 12:25:59.427480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.349 [2024-07-26 12:25:59.427505] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.349 [2024-07-26 12:25:59.427520] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.349 [2024-07-26 12:25:59.427533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.349 [2024-07-26 12:25:59.427561] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.349 qpair failed and we were unable to recover it. 
00:25:06.349 [2024-07-26 12:25:59.437383] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.349 [2024-07-26 12:25:59.437545] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.349 [2024-07-26 12:25:59.437575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.349 [2024-07-26 12:25:59.437590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.349 [2024-07-26 12:25:59.437603] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.349 [2024-07-26 12:25:59.437630] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.349 qpair failed and we were unable to recover it. 
00:25:06.349 [2024-07-26 12:25:59.447398] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.349 [2024-07-26 12:25:59.447535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.349 [2024-07-26 12:25:59.447560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.349 [2024-07-26 12:25:59.447575] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.349 [2024-07-26 12:25:59.447587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.349 [2024-07-26 12:25:59.447615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.349 qpair failed and we were unable to recover it. 
00:25:06.349 [2024-07-26 12:25:59.457496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.349 [2024-07-26 12:25:59.457624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.349 [2024-07-26 12:25:59.457649] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.349 [2024-07-26 12:25:59.457663] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.349 [2024-07-26 12:25:59.457676] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.349 [2024-07-26 12:25:59.457703] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.349 qpair failed and we were unable to recover it. 
00:25:06.349 [2024-07-26 12:25:59.467493] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.349 [2024-07-26 12:25:59.467626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.349 [2024-07-26 12:25:59.467651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.349 [2024-07-26 12:25:59.467665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.349 [2024-07-26 12:25:59.467678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.349 [2024-07-26 12:25:59.467705] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.349 qpair failed and we were unable to recover it. 
00:25:06.349 [2024-07-26 12:25:59.477491] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.349 [2024-07-26 12:25:59.477620] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.349 [2024-07-26 12:25:59.477645] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.349 [2024-07-26 12:25:59.477659] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.350 [2024-07-26 12:25:59.477672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.350 [2024-07-26 12:25:59.477706] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.350 qpair failed and we were unable to recover it. 
00:25:06.350 [2024-07-26 12:25:59.487568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.350 [2024-07-26 12:25:59.487705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.350 [2024-07-26 12:25:59.487731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.350 [2024-07-26 12:25:59.487745] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.350 [2024-07-26 12:25:59.487758] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.350 [2024-07-26 12:25:59.487785] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.350 qpair failed and we were unable to recover it. 
00:25:06.350 [2024-07-26 12:25:59.497548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.350 [2024-07-26 12:25:59.497675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.350 [2024-07-26 12:25:59.497701] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.350 [2024-07-26 12:25:59.497715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.350 [2024-07-26 12:25:59.497727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.350 [2024-07-26 12:25:59.497755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.350 qpair failed and we were unable to recover it. 
00:25:06.350 [2024-07-26 12:25:59.507585] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.350 [2024-07-26 12:25:59.507730] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.350 [2024-07-26 12:25:59.507756] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.350 [2024-07-26 12:25:59.507770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.350 [2024-07-26 12:25:59.507783] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.350 [2024-07-26 12:25:59.507811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.350 qpair failed and we were unable to recover it. 
00:25:06.350 [2024-07-26 12:25:59.517594] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.350 [2024-07-26 12:25:59.517726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.350 [2024-07-26 12:25:59.517750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.350 [2024-07-26 12:25:59.517765] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.350 [2024-07-26 12:25:59.517778] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.350 [2024-07-26 12:25:59.517805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.350 qpair failed and we were unable to recover it. 
00:25:06.350 [2024-07-26 12:25:59.527610] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.350 [2024-07-26 12:25:59.527750] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.350 [2024-07-26 12:25:59.527780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.350 [2024-07-26 12:25:59.527795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.350 [2024-07-26 12:25:59.527808] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.350 [2024-07-26 12:25:59.527836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.350 qpair failed and we were unable to recover it. 
00:25:06.350 [2024-07-26 12:25:59.537644] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.350 [2024-07-26 12:25:59.537774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.350 [2024-07-26 12:25:59.537800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.350 [2024-07-26 12:25:59.537814] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.350 [2024-07-26 12:25:59.537827] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.350 [2024-07-26 12:25:59.537855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.350 qpair failed and we were unable to recover it. 
00:25:06.350 [2024-07-26 12:25:59.547705] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.350 [2024-07-26 12:25:59.547820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.350 [2024-07-26 12:25:59.547846] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.350 [2024-07-26 12:25:59.547861] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.350 [2024-07-26 12:25:59.547874] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.350 [2024-07-26 12:25:59.547901] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.350 qpair failed and we were unable to recover it. 
00:25:06.350 [2024-07-26 12:25:59.557800] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.350 [2024-07-26 12:25:59.557930] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.350 [2024-07-26 12:25:59.557955] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.350 [2024-07-26 12:25:59.557969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.350 [2024-07-26 12:25:59.557982] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.350 [2024-07-26 12:25:59.558010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.350 qpair failed and we were unable to recover it. 
00:25:06.350 [2024-07-26 12:25:59.567735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.350 [2024-07-26 12:25:59.567866] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.350 [2024-07-26 12:25:59.567891] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.350 [2024-07-26 12:25:59.567905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.350 [2024-07-26 12:25:59.567918] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.350 [2024-07-26 12:25:59.567952] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.350 qpair failed and we were unable to recover it. 
00:25:06.350 [2024-07-26 12:25:59.577765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.350 [2024-07-26 12:25:59.577931] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.350 [2024-07-26 12:25:59.577956] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.350 [2024-07-26 12:25:59.577970] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.350 [2024-07-26 12:25:59.577983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.350 [2024-07-26 12:25:59.578010] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.350 qpair failed and we were unable to recover it. 
00:25:06.350 [2024-07-26 12:25:59.587876] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.350 [2024-07-26 12:25:59.588000] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.350 [2024-07-26 12:25:59.588025] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.350 [2024-07-26 12:25:59.588040] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.350 [2024-07-26 12:25:59.588053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.350 [2024-07-26 12:25:59.588091] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.350 qpair failed and we were unable to recover it. 
00:25:06.350 [2024-07-26 12:25:59.597843] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.350 [2024-07-26 12:25:59.597978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.350 [2024-07-26 12:25:59.598003] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.350 [2024-07-26 12:25:59.598017] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.350 [2024-07-26 12:25:59.598030] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.350 [2024-07-26 12:25:59.598057] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.350 qpair failed and we were unable to recover it. 
00:25:06.610 [2024-07-26 12:25:59.607874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.610 [2024-07-26 12:25:59.608013] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.610 [2024-07-26 12:25:59.608038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.610 [2024-07-26 12:25:59.608053] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.610 [2024-07-26 12:25:59.608073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.610 [2024-07-26 12:25:59.608101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.610 qpair failed and we were unable to recover it. 
00:25:06.610 [2024-07-26 12:25:59.617885] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.610 [2024-07-26 12:25:59.618006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.610 [2024-07-26 12:25:59.618037] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.610 [2024-07-26 12:25:59.618052] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.610 [2024-07-26 12:25:59.618072] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.610 [2024-07-26 12:25:59.618101] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.610 qpair failed and we were unable to recover it. 
00:25:06.610 [2024-07-26 12:25:59.627919] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.610 [2024-07-26 12:25:59.628055] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.610 [2024-07-26 12:25:59.628089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.610 [2024-07-26 12:25:59.628103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.610 [2024-07-26 12:25:59.628116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.610 [2024-07-26 12:25:59.628144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.610 qpair failed and we were unable to recover it. 
00:25:06.610 [2024-07-26 12:25:59.637927] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.610 [2024-07-26 12:25:59.638064] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.610 [2024-07-26 12:25:59.638089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.610 [2024-07-26 12:25:59.638103] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.610 [2024-07-26 12:25:59.638116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.610 [2024-07-26 12:25:59.638144] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.610 qpair failed and we were unable to recover it. 
00:25:06.610 [2024-07-26 12:25:59.647947] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.611 [2024-07-26 12:25:59.648093] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.611 [2024-07-26 12:25:59.648119] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.611 [2024-07-26 12:25:59.648133] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.611 [2024-07-26 12:25:59.648146] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.611 [2024-07-26 12:25:59.648174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.611 qpair failed and we were unable to recover it. 
00:25:06.611 [2024-07-26 12:25:59.658006] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.611 [2024-07-26 12:25:59.658144] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.611 [2024-07-26 12:25:59.658169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.611 [2024-07-26 12:25:59.658183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.611 [2024-07-26 12:25:59.658196] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.611 [2024-07-26 12:25:59.658229] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.611 qpair failed and we were unable to recover it. 
00:25:06.611 [2024-07-26 12:25:59.668130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.611 [2024-07-26 12:25:59.668275] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.611 [2024-07-26 12:25:59.668301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.611 [2024-07-26 12:25:59.668316] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.611 [2024-07-26 12:25:59.668329] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.611 [2024-07-26 12:25:59.668357] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.611 qpair failed and we were unable to recover it. 
00:25:06.611 [2024-07-26 12:25:59.678094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.611 [2024-07-26 12:25:59.678232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.611 [2024-07-26 12:25:59.678256] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.611 [2024-07-26 12:25:59.678271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.611 [2024-07-26 12:25:59.678283] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.611 [2024-07-26 12:25:59.678313] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.611 qpair failed and we were unable to recover it. 
00:25:06.611 [2024-07-26 12:25:59.688049] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.611 [2024-07-26 12:25:59.688193] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.611 [2024-07-26 12:25:59.688218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.611 [2024-07-26 12:25:59.688233] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.611 [2024-07-26 12:25:59.688246] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.611 [2024-07-26 12:25:59.688273] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.611 qpair failed and we were unable to recover it. 
00:25:06.611 [2024-07-26 12:25:59.698102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.611 [2024-07-26 12:25:59.698262] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.611 [2024-07-26 12:25:59.698288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.611 [2024-07-26 12:25:59.698303] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.611 [2024-07-26 12:25:59.698322] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.611 [2024-07-26 12:25:59.698351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.611 qpair failed and we were unable to recover it. 
00:25:06.611 [2024-07-26 12:25:59.708124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.611 [2024-07-26 12:25:59.708251] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.611 [2024-07-26 12:25:59.708282] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.611 [2024-07-26 12:25:59.708298] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.611 [2024-07-26 12:25:59.708310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.611 [2024-07-26 12:25:59.708338] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.611 qpair failed and we were unable to recover it. 
00:25:06.611 [2024-07-26 12:25:59.718246] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.611 [2024-07-26 12:25:59.718391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.611 [2024-07-26 12:25:59.718416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.611 [2024-07-26 12:25:59.718430] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.611 [2024-07-26 12:25:59.718443] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.611 [2024-07-26 12:25:59.718471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.611 qpair failed and we were unable to recover it.
00:25:06.611 [2024-07-26 12:25:59.728175] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.611 [2024-07-26 12:25:59.728300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.611 [2024-07-26 12:25:59.728325] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.611 [2024-07-26 12:25:59.728339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.611 [2024-07-26 12:25:59.728354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.611 [2024-07-26 12:25:59.728382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.611 qpair failed and we were unable to recover it.
00:25:06.611 [2024-07-26 12:25:59.738188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.611 [2024-07-26 12:25:59.738310] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.611 [2024-07-26 12:25:59.738335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.611 [2024-07-26 12:25:59.738350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.611 [2024-07-26 12:25:59.738362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.611 [2024-07-26 12:25:59.738390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.611 qpair failed and we were unable to recover it.
00:25:06.611 [2024-07-26 12:25:59.748311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.611 [2024-07-26 12:25:59.748430] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.611 [2024-07-26 12:25:59.748455] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.611 [2024-07-26 12:25:59.748470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.611 [2024-07-26 12:25:59.748491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.611 [2024-07-26 12:25:59.748520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.611 qpair failed and we were unable to recover it.
00:25:06.611 [2024-07-26 12:25:59.758271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.611 [2024-07-26 12:25:59.758410] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.611 [2024-07-26 12:25:59.758435] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.611 [2024-07-26 12:25:59.758449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.611 [2024-07-26 12:25:59.758462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.611 [2024-07-26 12:25:59.758489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.611 qpair failed and we were unable to recover it.
00:25:06.611 [2024-07-26 12:25:59.768412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.611 [2024-07-26 12:25:59.768540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.611 [2024-07-26 12:25:59.768566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.611 [2024-07-26 12:25:59.768580] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.612 [2024-07-26 12:25:59.768593] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.612 [2024-07-26 12:25:59.768620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.612 qpair failed and we were unable to recover it.
00:25:06.612 [2024-07-26 12:25:59.778353] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.612 [2024-07-26 12:25:59.778524] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.612 [2024-07-26 12:25:59.778549] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.612 [2024-07-26 12:25:59.778564] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.612 [2024-07-26 12:25:59.778577] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.612 [2024-07-26 12:25:59.778605] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.612 qpair failed and we were unable to recover it.
00:25:06.612 [2024-07-26 12:25:59.788378] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.612 [2024-07-26 12:25:59.788523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.612 [2024-07-26 12:25:59.788548] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.612 [2024-07-26 12:25:59.788563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.612 [2024-07-26 12:25:59.788575] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.612 [2024-07-26 12:25:59.788602] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.612 qpair failed and we were unable to recover it.
00:25:06.612 [2024-07-26 12:25:59.798389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.612 [2024-07-26 12:25:59.798535] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.612 [2024-07-26 12:25:59.798560] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.612 [2024-07-26 12:25:59.798574] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.612 [2024-07-26 12:25:59.798587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.612 [2024-07-26 12:25:59.798614] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.612 qpair failed and we were unable to recover it.
00:25:06.612 [2024-07-26 12:25:59.808506] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.612 [2024-07-26 12:25:59.808637] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.612 [2024-07-26 12:25:59.808662] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.612 [2024-07-26 12:25:59.808676] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.612 [2024-07-26 12:25:59.808689] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.612 [2024-07-26 12:25:59.808716] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.612 qpair failed and we were unable to recover it.
00:25:06.612 [2024-07-26 12:25:59.818522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.612 [2024-07-26 12:25:59.818686] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.612 [2024-07-26 12:25:59.818711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.612 [2024-07-26 12:25:59.818726] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.612 [2024-07-26 12:25:59.818739] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.612 [2024-07-26 12:25:59.818767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.612 qpair failed and we were unable to recover it.
00:25:06.612 [2024-07-26 12:25:59.828510] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.612 [2024-07-26 12:25:59.828630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.612 [2024-07-26 12:25:59.828655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.612 [2024-07-26 12:25:59.828669] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.612 [2024-07-26 12:25:59.828682] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.612 [2024-07-26 12:25:59.828709] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.612 qpair failed and we were unable to recover it.
00:25:06.612 [2024-07-26 12:25:59.838500] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.612 [2024-07-26 12:25:59.838626] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.612 [2024-07-26 12:25:59.838651] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.612 [2024-07-26 12:25:59.838665] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.612 [2024-07-26 12:25:59.838683] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.612 [2024-07-26 12:25:59.838711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.612 qpair failed and we were unable to recover it.
00:25:06.612 [2024-07-26 12:25:59.848548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.612 [2024-07-26 12:25:59.848680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.612 [2024-07-26 12:25:59.848705] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.612 [2024-07-26 12:25:59.848720] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.612 [2024-07-26 12:25:59.848733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.612 [2024-07-26 12:25:59.848761] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.612 qpair failed and we were unable to recover it.
00:25:06.612 [2024-07-26 12:25:59.858554] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.612 [2024-07-26 12:25:59.858675] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.612 [2024-07-26 12:25:59.858700] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.612 [2024-07-26 12:25:59.858715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.612 [2024-07-26 12:25:59.858727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.612 [2024-07-26 12:25:59.858756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.612 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.868609] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.868729] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.868754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.868769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.868782] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.868811] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.878717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.878842] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.878866] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.878881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.878894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.878922] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.888646] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.888779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.888805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.888820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.888832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.888860] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.898770] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.898891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.898916] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.898931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.898943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.898971] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.908704] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.908877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.908903] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.908917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.908930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.908958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.918735] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.918863] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.918888] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.918903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.918915] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.918942] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.928775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.928898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.928924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.928944] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.928958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.928988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.938834] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.938981] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.939006] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.939020] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.939033] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.939066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.948811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.948934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.948960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.948975] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.948988] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.949015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.958854] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.958984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.959010] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.959025] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.959038] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.959071] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.968897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.969025] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.969049] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.969070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.969083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.969110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.978936] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.979085] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.979111] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.979125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.979138] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.979166] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.988938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.989071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.989097] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.989111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.989124] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.989152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:25:59.999124] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:25:59.999255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:25:59.999279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:25:59.999293] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:25:59.999306] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:25:59.999334] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:26:00.009054] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:26:00.009233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:26:00.009259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:26:00.009273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:26:00.009286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:26:00.009314] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:26:00.019057] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:26:00.019201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:26:00.019229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:26:00.019252] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:26:00.019267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:26:00.019298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.874 qpair failed and we were unable to recover it.
00:25:06.874 [2024-07-26 12:26:00.029092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.874 [2024-07-26 12:26:00.029226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.874 [2024-07-26 12:26:00.029252] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.874 [2024-07-26 12:26:00.029267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.874 [2024-07-26 12:26:00.029280] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.874 [2024-07-26 12:26:00.029311] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.875 qpair failed and we were unable to recover it.
00:25:06.875 [2024-07-26 12:26:00.039117] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.875 [2024-07-26 12:26:00.039299] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.875 [2024-07-26 12:26:00.039328] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.875 [2024-07-26 12:26:00.039347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.875 [2024-07-26 12:26:00.039361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.875 [2024-07-26 12:26:00.039391] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.875 qpair failed and we were unable to recover it.
00:25:06.875 [2024-07-26 12:26:00.049154] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.875 [2024-07-26 12:26:00.049311] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.875 [2024-07-26 12:26:00.049338] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.875 [2024-07-26 12:26:00.049357] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.875 [2024-07-26 12:26:00.049370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.875 [2024-07-26 12:26:00.049399] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.875 qpair failed and we were unable to recover it.
00:25:06.875 [2024-07-26 12:26:00.059130] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.875 [2024-07-26 12:26:00.059253] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.875 [2024-07-26 12:26:00.059279] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.875 [2024-07-26 12:26:00.059295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.875 [2024-07-26 12:26:00.059308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.875 [2024-07-26 12:26:00.059336] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.875 qpair failed and we were unable to recover it.
00:25:06.875 [2024-07-26 12:26:00.069187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:06.875 [2024-07-26 12:26:00.069316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:06.875 [2024-07-26 12:26:00.069344] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:06.875 [2024-07-26 12:26:00.069361] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:06.875 [2024-07-26 12:26:00.069376] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:06.875 [2024-07-26 12:26:00.069405] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:06.875 qpair failed and we were unable to recover it.
00:25:06.875 [2024-07-26 12:26:00.079229] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.875 [2024-07-26 12:26:00.079376] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.875 [2024-07-26 12:26:00.079402] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.875 [2024-07-26 12:26:00.079417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.875 [2024-07-26 12:26:00.079431] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.875 [2024-07-26 12:26:00.079459] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.875 qpair failed and we were unable to recover it. 
00:25:06.875 [2024-07-26 12:26:00.089225] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.875 [2024-07-26 12:26:00.089349] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.875 [2024-07-26 12:26:00.089375] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.875 [2024-07-26 12:26:00.089390] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.875 [2024-07-26 12:26:00.089403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.875 [2024-07-26 12:26:00.089431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.875 qpair failed and we were unable to recover it. 
00:25:06.875 [2024-07-26 12:26:00.099264] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.875 [2024-07-26 12:26:00.099388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.875 [2024-07-26 12:26:00.099413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.875 [2024-07-26 12:26:00.099428] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.875 [2024-07-26 12:26:00.099440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.875 [2024-07-26 12:26:00.099468] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.875 qpair failed and we were unable to recover it. 
00:25:06.875 [2024-07-26 12:26:00.109268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.875 [2024-07-26 12:26:00.109388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.875 [2024-07-26 12:26:00.109413] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.875 [2024-07-26 12:26:00.109434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.875 [2024-07-26 12:26:00.109447] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.875 [2024-07-26 12:26:00.109475] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.875 qpair failed and we were unable to recover it. 
00:25:06.875 [2024-07-26 12:26:00.119320] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:25:06.875 [2024-07-26 12:26:00.119447] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:25:06.875 [2024-07-26 12:26:00.119472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:25:06.875 [2024-07-26 12:26:00.119486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:25:06.875 [2024-07-26 12:26:00.119500] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250 00:25:06.875 [2024-07-26 12:26:00.119527] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:25:06.875 qpair failed and we were unable to recover it. 
00:25:07.136 [2024-07-26 12:26:00.129370] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.136 [2024-07-26 12:26:00.129491] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.136 [2024-07-26 12:26:00.129516] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.136 [2024-07-26 12:26:00.129531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.136 [2024-07-26 12:26:00.129544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:07.136 [2024-07-26 12:26:00.129572] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.136 qpair failed and we were unable to recover it.
00:25:07.136 [2024-07-26 12:26:00.139384] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.136 [2024-07-26 12:26:00.139506] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.136 [2024-07-26 12:26:00.139532] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.136 [2024-07-26 12:26:00.139546] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.136 [2024-07-26 12:26:00.139559] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:07.136 [2024-07-26 12:26:00.139586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.136 qpair failed and we were unable to recover it.
00:25:07.136 [2024-07-26 12:26:00.149475] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.136 [2024-07-26 12:26:00.149601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.136 [2024-07-26 12:26:00.149627] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.136 [2024-07-26 12:26:00.149641] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.136 [2024-07-26 12:26:00.149655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:07.136 [2024-07-26 12:26:00.149682] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.136 qpair failed and we were unable to recover it.
00:25:07.136 [2024-07-26 12:26:00.159446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.136 [2024-07-26 12:26:00.159573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.136 [2024-07-26 12:26:00.159597] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.136 [2024-07-26 12:26:00.159612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.136 [2024-07-26 12:26:00.159625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:07.136 [2024-07-26 12:26:00.159653] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.136 qpair failed and we were unable to recover it.
00:25:07.136 [2024-07-26 12:26:00.169517] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.136 [2024-07-26 12:26:00.169645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.136 [2024-07-26 12:26:00.169672] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.136 [2024-07-26 12:26:00.169687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.136 [2024-07-26 12:26:00.169700] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x21bf250
00:25:07.136 [2024-07-26 12:26:00.169728] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3
00:25:07.136 qpair failed and we were unable to recover it.
00:25:07.136 [2024-07-26 12:26:00.179498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.136 [2024-07-26 12:26:00.179645] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.136 [2024-07-26 12:26:00.179680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.136 [2024-07-26 12:26:00.179699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.136 [2024-07-26 12:26:00.179714] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4f0000b90
00:25:07.136 [2024-07-26 12:26:00.179746] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:07.136 qpair failed and we were unable to recover it.
00:25:07.136 [2024-07-26 12:26:00.189537] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.136 [2024-07-26 12:26:00.189680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.136 [2024-07-26 12:26:00.189707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.136 [2024-07-26 12:26:00.189723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.136 [2024-07-26 12:26:00.189737] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4f0000b90
00:25:07.136 [2024-07-26 12:26:00.189767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4
00:25:07.136 qpair failed and we were unable to recover it.
00:25:07.136 [2024-07-26 12:26:00.199555] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.136 [2024-07-26 12:26:00.199687] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.136 [2024-07-26 12:26:00.199725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.136 [2024-07-26 12:26:00.199743] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.136 [2024-07-26 12:26:00.199757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb500000b90
00:25:07.136 [2024-07-26 12:26:00.199801] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.136 qpair failed and we were unable to recover it.
00:25:07.136 [2024-07-26 12:26:00.209578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.136 [2024-07-26 12:26:00.209724] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.136 [2024-07-26 12:26:00.209751] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.136 [2024-07-26 12:26:00.209767] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.136 [2024-07-26 12:26:00.209780] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb500000b90
00:25:07.136 [2024-07-26 12:26:00.209810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1
00:25:07.136 qpair failed and we were unable to recover it.
00:25:07.136 [2024-07-26 12:26:00.209944] nvme_ctrlr.c:4480:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed
00:25:07.136 A controller has encountered a failure and is being reset.
00:25:07.136 [2024-07-26 12:26:00.219600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.136 [2024-07-26 12:26:00.219779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.136 [2024-07-26 12:26:00.219814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.136 [2024-07-26 12:26:00.219832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.136 [2024-07-26 12:26:00.219847] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4f8000b90
00:25:07.136 [2024-07-26 12:26:00.219882] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:07.136 qpair failed and we were unable to recover it.
00:25:07.136 [2024-07-26 12:26:00.229645] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:25:07.137 [2024-07-26 12:26:00.229768] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:25:07.137 [2024-07-26 12:26:00.229796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:25:07.137 [2024-07-26 12:26:00.229812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:25:07.137 [2024-07-26 12:26:00.229826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7fb4f8000b90
00:25:07.137 [2024-07-26 12:26:00.229856] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2
00:25:07.137 qpair failed and we were unable to recover it.
00:25:07.395 Controller properly reset.
00:25:07.395 Initializing NVMe Controllers
00:25:07.395 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:07.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:25:07.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0
00:25:07.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1
00:25:07.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2
00:25:07.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3
00:25:07.395 Initialization complete. Launching workers.
00:25:07.395 Starting thread on core 1
00:25:07.395 Starting thread on core 2
00:25:07.395 Starting thread on core 3
00:25:07.395 Starting thread on core 0
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync
00:25:07.395
00:25:07.395 real 0m10.966s
00:25:07.395 user 0m18.000s
00:25:07.395 sys 0m5.604s
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:25:07.395 ************************************
00:25:07.395 END TEST nvmf_target_disconnect_tc2
00:25:07.395 ************************************
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']'
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20}
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 2979286 ']'
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 2979286
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@950 -- # '[' -z 2979286 ']'
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # kill -0 2979286
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # uname
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']'
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2979286
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@956 -- # process_name=reactor_4
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # '[' reactor_4 = sudo ']'
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2979286'
killing process with pid 2979286
12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@969 -- # kill 2979286
00:25:07.395 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@974 -- # wait 2979286
00:25:07.653 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']'
00:25:07.653 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]]
00:25:07.653 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini
00:25:07.653 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]]
00:25:07.653 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns
00:25:07.653 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns
00:25:07.653 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null'
00:25:07.653 12:26:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns
00:25:10.185 12:26:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1
00:25:10.185
00:25:10.185 real 0m15.715s
00:25:10.185 user 0m44.874s
00:25:10.185 sys 0m7.536s
00:25:10.185 12:26:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:10.185 12:26:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x
00:25:10.185 ************************************
00:25:10.185 END TEST nvmf_target_disconnect
00:25:10.185 ************************************
00:25:10.185 12:26:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT
00:25:10.185
00:25:10.185 real 5m5.090s
00:25:10.185 user 10m42.826s
00:25:10.185 sys 1m12.100s
00:25:10.185 12:26:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:10.185 12:26:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:25:10.185 ************************************
00:25:10.185 END TEST nvmf_host
00:25:10.185 ************************************
00:25:10.185
00:25:10.185 real 19m37.758s
00:25:10.185 user 46m10.267s
00:25:10.185 sys 4m57.391s
00:25:10.185 12:26:02 nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable
00:25:10.185 12:26:02 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:10.185 ************************************
00:25:10.185 END TEST nvmf_tcp
00:25:10.185 ************************************
00:25:10.185 12:26:02 -- spdk/autotest.sh@292 -- # [[ 0 -eq 0 ]]
00:25:10.185 12:26:02 -- spdk/autotest.sh@293 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:25:10.185 12:26:02 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']'
00:25:10.185 12:26:02 -- common/autotest_common.sh@1107 -- # xtrace_disable
00:25:10.185 12:26:02 -- common/autotest_common.sh@10 -- # set +x
00:25:10.185 ************************************
00:25:10.185 START TEST spdkcli_nvmf_tcp
00:25:10.185 ************************************
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp
00:25:10.185 * Looking for test storage...
00:25:10.185 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS=
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:25:10.185 12:26:02 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect'
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]]
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']'
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']'
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']'
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=2980487
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 2980487
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@831 -- # '[' -z 2980487 ']'
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # local max_retries=100
00:25:10.185 12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # xtrace_disable
12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
[2024-07-26 12:26:03.055288] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization...
[2024-07-26 12:26:03.055383] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid2980487 ]
EAL: No free 2048 kB hugepages reported on node 1
[2024-07-26 12:26:03.111913] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2
[2024-07-26 12:26:03.234078] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1
[2024-07-26 12:26:03.234085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0
12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@860 -- # (( i == 0 ))
12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # return 0
12:26:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt
12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable
12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x
12:26:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1
12:26:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]]
12:26:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config
12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable
12:26:03 spdkcli_nvmf_tcp -- common/autotest_common.sh@10
-- # set +x 00:25:10.186 12:26:03 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:25:10.186 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:25:10.186 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:25:10.186 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:25:10.186 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:25:10.186 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:25:10.186 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:25:10.186 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:10.186 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:10.186 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:25:10.186 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:25:10.186 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:25:10.186 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:25:10.186 ' 00:25:12.719 [2024-07-26 12:26:05.929512] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:14.097 [2024-07-26 12:26:07.165856] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:25:16.635 [2024-07-26 12:26:09.452997] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:25:18.539 [2024-07-26 12:26:11.459638] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 127.0.0.1 port 4262 *** 00:25:19.913 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:25:19.913 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:25:19.913 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:25:19.913 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:25:19.913 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:25:19.913 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:25:19.913 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:25:19.913 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:19.913 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:19.913 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 
'Malloc1', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:25:19.913 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:25:19.913 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:25:19.913 12:26:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:25:19.913 12:26:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:19.913 12:26:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.913 12:26:13 
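The `Executing command: [...]` replay above comes from `spdkcli_job.py`, which runs each `[command, expected-output, reexec]` tuple and checks that the expected string appears in the output. A minimal shell sketch of that loop, with `echo` standing in for `spdkcli.py` (which needs a running `nvmf_tgt`); `run_and_check` and its sample commands are illustrative, not part of the real job script:

```shell
# Run a command and verify its output contains an expected substring,
# mimicking the [command, expected, reexec] tuples spdkcli_job.py replays.
run_and_check() {
    cmd=$1 expect=$2
    out=$(eval "$cmd")
    case "$out" in
        *"$expect"*) echo "Executing command: [$cmd, $expect] -> OK" ;;
        *) echo "Executing command: [$cmd, $expect] -> FAILED"; return 1 ;;
    esac
}

# 'echo' stands in for spdkcli.py invocations such as
# '/bdevs/malloc create 32 512 Malloc1'.
run_and_check "echo Malloc1 created" "Malloc1"
run_and_check "echo nqn.2014-08.org.spdk:cnode1 created" "cnode1"
```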
spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:25:19.913 12:26:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:19.913 12:26:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:19.913 12:26:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:25:19.913 12:26:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:25:20.484 12:26:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:25:20.484 12:26:13 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:25:20.484 12:26:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:25:20.484 12:26:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:20.484 12:26:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:20.484 12:26:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:25:20.484 12:26:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:20.484 12:26:13 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:20.484 12:26:13 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:25:20.484 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:25:20.484 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:20.484 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' 
'\''nqn.2014-08.org.spdk:cnode1'\'' 00:25:20.484 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:25:20.484 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:25:20.484 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:25:20.484 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:25:20.484 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:25:20.484 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:25:20.484 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:25:20.484 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:25:20.484 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:25:20.484 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:25:20.484 ' 00:25:25.789 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:25:25.789 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:25:25.789 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:25.789 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:25:25.789 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:25:25.789 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:25:25.789 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:25:25.789 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:25:25.789 Executing command: ['/bdevs/malloc delete Malloc6', 
'Malloc6', False] 00:25:25.789 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:25:25.789 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:25:25.789 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:25:25.789 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:25:25.789 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 2980487 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2980487 ']' 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2980487 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # uname 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2980487 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2980487' 00:25:25.789 killing process with pid 2980487 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@969 -- # kill 2980487 00:25:25.789 12:26:18 spdkcli_nvmf_tcp -- common/autotest_common.sh@974 -- # wait 2980487 00:25:26.049 12:26:19 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:25:26.049 12:26:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:25:26.049 
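The `killprocess` trace above terminates the target and then probes it with `kill -0`, which is why the follow-up check reports "No such process". A self-contained sketch of that kill-and-verify pattern, with `sleep` standing in for the real `nvmf_tgt` process:

```shell
# Terminate a process, reap it, then verify it is gone with 'kill -0'
# (signal 0 delivers nothing; it only tests whether the pid exists).
sleep 60 &
pid=$!

kill "$pid"
wait "$pid" 2>/dev/null || true   # reap it so the pid is really gone

if kill -0 "$pid" 2>/dev/null; then
    echo "killing process with pid $pid"
else
    echo "Process with pid $pid is not found"
fi
```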
12:26:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 2980487 ']' 00:25:26.049 12:26:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 2980487 00:25:26.049 12:26:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@950 -- # '[' -z 2980487 ']' 00:25:26.049 12:26:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # kill -0 2980487 00:25:26.049 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2980487) - No such process 00:25:26.049 12:26:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@977 -- # echo 'Process with pid 2980487 is not found' 00:25:26.049 Process with pid 2980487 is not found 00:25:26.049 12:26:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:25:26.049 12:26:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:25:26.049 12:26:19 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:25:26.049 00:25:26.049 real 0m16.208s 00:25:26.049 user 0m34.335s 00:25:26.049 sys 0m0.813s 00:25:26.049 12:26:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:26.049 12:26:19 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:25:26.049 ************************************ 00:25:26.049 END TEST spdkcli_nvmf_tcp 00:25:26.049 ************************************ 00:25:26.049 12:26:19 -- spdk/autotest.sh@294 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:26.049 12:26:19 -- common/autotest_common.sh@1101 -- # '[' 3 -le 1 ']' 00:25:26.049 12:26:19 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:26.049 12:26:19 -- common/autotest_common.sh@10 -- # set +x 00:25:26.049 ************************************ 00:25:26.049 START TEST 
nvmf_identify_passthru 00:25:26.049 ************************************ 00:25:26.049 12:26:19 nvmf_identify_passthru -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:25:26.049 * Looking for test storage... 00:25:26.049 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:26.049 12:26:19 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:26.049 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:25:26.049 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 
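The `nvme gen-hostnqn` call traced above emits an NQN of the form `nqn.2014-08.org.nvmexpress:uuid:<uuid>`, which `common.sh` splits into `NVME_HOSTNQN` and `NVME_HOSTID`. A sketch of the same construction without nvme-cli; the kernel's random UUID source is used here, whereas `gen-hostnqn` may derive the UUID from DMI data instead:

```shell
# Build a hostnqn/hostid pair the way nvmf/common.sh does after
# 'nvme gen-hostnqn'. The uuid here is random, not hardware-derived.
uuid=$(cat /proc/sys/kernel/random/uuid)
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
NVME_HOSTID="$uuid"

# Same array shape as NVME_HOST in the trace: ready-to-splice CLI options.
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
echo "${NVME_HOST[@]}"
```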
00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.050 12:26:19 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.050 12:26:19 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.050 12:26:19 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.050 12:26:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.050 12:26:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.050 12:26:19 nvmf_identify_passthru -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.050 12:26:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:25:26.050 12:26:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:26.050 12:26:19 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:26.050 12:26:19 
nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:26.050 12:26:19 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:26.050 12:26:19 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:26.050 12:26:19 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.050 12:26:19 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.050 12:26:19 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.050 12:26:19 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 
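The `paths/export.sh` lines above prepend the same tool directories each time the file is sourced, which is why the traced PATH contains `/opt/go/1.21.1/bin` and friends several times over. A guarded prepend keeps the operation idempotent; the sketch below uses a stand-in variable rather than mutating PATH itself:

```shell
# Prepend a directory only if it is not already present, avoiding the
# duplicate entries visible in the export.sh trace.
path="/usr/bin:/bin"

prepend_path() {
    case ":$path:" in
        *":$1:"*) ;;             # already present: skip
        *) path="$1:$path" ;;
    esac
}

prepend_path /opt/go/1.21.1/bin
prepend_path /opt/go/1.21.1/bin  # sourcing a second time adds no duplicate
echo "$path"
```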
00:25:26.050 12:26:19 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:26.050 12:26:19 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:26.050 12:26:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:26.050 12:26:19 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:26.050 12:26:19 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:25:26.050 12:26:19 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@291 -- # 
pci_devs=() 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:25:28.585 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:28.586 12:26:21 nvmf_identify_passthru -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:28.586 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:25:28.586 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:28.586 12:26:21 
nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:28.586 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:28.586 12:26:21 nvmf_identify_passthru -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:28.586 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:28.586 12:26:21 
nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:28.586 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:28.586 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.200 ms 00:25:28.586 00:25:28.586 --- 10.0.0.2 ping statistics --- 00:25:28.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.586 rtt min/avg/max/mdev = 0.200/0.200/0.200/0.000 ms 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:28.586 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:28.586 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.115 ms 00:25:28.586 00:25:28.586 --- 10.0.0.1 ping statistics --- 00:25:28.586 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:28.586 rtt min/avg/max/mdev = 0.115/0.115/0.115/0.000 ms 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:28.586 12:26:21 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:28.586 12:26:21 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:25:28.586 12:26:21 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:28.586 12:26:21 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:28.586 12:26:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:25:28.586 12:26:21 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:25:28.586 12:26:21 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:25:28.586 12:26:21 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:25:28.586 12:26:21 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:25:28.586 12:26:21 nvmf_identify_passthru -- 
common/autotest_common.sh@1513 -- # bdfs=() 00:25:28.586 12:26:21 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:25:28.586 12:26:21 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:25:28.586 12:26:21 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:25:28.586 12:26:21 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:25:28.586 12:26:21 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:25:28.587 12:26:21 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:88:00.0 00:25:28.587 12:26:21 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:88:00.0 00:25:28.587 12:26:21 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:88:00.0 00:25:28.587 12:26:21 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:88:00.0 ']' 00:25:28.587 12:26:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:28.587 12:26:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:25:28.587 12:26:21 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:25:28.587 EAL: No free 2048 kB hugepages reported on node 1 00:25:32.776 12:26:25 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLJ916004901P0FGN 00:25:32.776 12:26:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:88:00.0' -i 0 00:25:32.776 12:26:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 
00:25:32.776 12:26:25 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:25:32.776 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.965 12:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:25:36.965 12:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:25:36.965 12:26:29 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:36.965 12:26:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:36.965 12:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:25:36.965 12:26:29 nvmf_identify_passthru -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:36.965 12:26:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:36.965 12:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=2985114 00:25:36.965 12:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:25:36.965 12:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:25:36.965 12:26:29 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 2985114 00:25:36.965 12:26:29 nvmf_identify_passthru -- common/autotest_common.sh@831 -- # '[' -z 2985114 ']' 00:25:36.965 12:26:29 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:36.965 12:26:29 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:36.965 12:26:29 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:25:36.965 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:36.965 12:26:29 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:36.965 12:26:29 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:36.965 [2024-07-26 12:26:29.911480] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:25:36.965 [2024-07-26 12:26:29.911579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:36.965 EAL: No free 2048 kB hugepages reported on node 1 00:25:36.965 [2024-07-26 12:26:29.978249] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:25:36.965 [2024-07-26 12:26:30.092959] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:36.965 [2024-07-26 12:26:30.093017] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:36.965 [2024-07-26 12:26:30.093045] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:36.965 [2024-07-26 12:26:30.093057] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:36.965 [2024-07-26 12:26:30.093075] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:25:36.965 [2024-07-26 12:26:30.093140] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:25:36.965 [2024-07-26 12:26:30.093164] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:25:36.965 [2024-07-26 12:26:30.093187] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:25:36.965 [2024-07-26 12:26:30.093190] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:36.965 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:36.965 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # return 0 00:25:36.965 12:26:30 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:25:36.965 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.965 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:36.965 INFO: Log level set to 20 00:25:36.965 INFO: Requests: 00:25:36.965 { 00:25:36.965 "jsonrpc": "2.0", 00:25:36.965 "method": "nvmf_set_config", 00:25:36.965 "id": 1, 00:25:36.965 "params": { 00:25:36.965 "admin_cmd_passthru": { 00:25:36.965 "identify_ctrlr": true 00:25:36.965 } 00:25:36.965 } 00:25:36.965 } 00:25:36.965 00:25:36.965 INFO: response: 00:25:36.965 { 00:25:36.965 "jsonrpc": "2.0", 00:25:36.965 "id": 1, 00:25:36.965 "result": true 00:25:36.965 } 00:25:36.965 00:25:36.965 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:36.965 12:26:30 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:25:36.965 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:36.965 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:36.965 INFO: Setting log level to 20 00:25:36.965 INFO: Setting log level to 20 00:25:36.965 INFO: Log level set to 20 00:25:36.965 INFO: Log level set to 20 00:25:36.965 
INFO: Requests: 00:25:36.965 { 00:25:36.965 "jsonrpc": "2.0", 00:25:36.965 "method": "framework_start_init", 00:25:36.965 "id": 1 00:25:36.965 } 00:25:36.965 00:25:36.965 INFO: Requests: 00:25:36.965 { 00:25:36.965 "jsonrpc": "2.0", 00:25:36.965 "method": "framework_start_init", 00:25:36.965 "id": 1 00:25:36.965 } 00:25:36.965 00:25:37.223 [2024-07-26 12:26:30.236302] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:25:37.223 INFO: response: 00:25:37.223 { 00:25:37.223 "jsonrpc": "2.0", 00:25:37.223 "id": 1, 00:25:37.223 "result": true 00:25:37.223 } 00:25:37.223 00:25:37.223 INFO: response: 00:25:37.223 { 00:25:37.223 "jsonrpc": "2.0", 00:25:37.223 "id": 1, 00:25:37.223 "result": true 00:25:37.223 } 00:25:37.223 00:25:37.223 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.223 12:26:30 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:25:37.223 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.223 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:37.223 INFO: Setting log level to 40 00:25:37.223 INFO: Setting log level to 40 00:25:37.223 INFO: Setting log level to 40 00:25:37.223 [2024-07-26 12:26:30.246469] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:37.223 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:37.223 12:26:30 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:25:37.223 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:37.223 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:37.223 12:26:30 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:88:00.0 00:25:37.223 12:26:30 
nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:37.223 12:26:30 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.511 Nvme0n1 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.511 [2024-07-26 12:26:33.137822] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.511 12:26:33 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.511 [ 00:25:40.511 { 00:25:40.511 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:25:40.511 "subtype": "Discovery", 00:25:40.511 "listen_addresses": [], 00:25:40.511 "allow_any_host": true, 00:25:40.511 "hosts": [] 00:25:40.511 }, 00:25:40.511 { 00:25:40.511 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:25:40.511 "subtype": "NVMe", 00:25:40.511 "listen_addresses": [ 00:25:40.511 { 00:25:40.511 "trtype": "TCP", 00:25:40.511 "adrfam": "IPv4", 00:25:40.511 "traddr": "10.0.0.2", 00:25:40.511 "trsvcid": "4420" 00:25:40.511 } 00:25:40.511 ], 00:25:40.511 "allow_any_host": true, 00:25:40.511 "hosts": [], 00:25:40.511 "serial_number": "SPDK00000000000001", 00:25:40.511 "model_number": "SPDK bdev Controller", 00:25:40.511 "max_namespaces": 1, 00:25:40.511 "min_cntlid": 1, 00:25:40.511 "max_cntlid": 65519, 00:25:40.511 "namespaces": [ 00:25:40.511 { 00:25:40.511 "nsid": 1, 00:25:40.511 "bdev_name": "Nvme0n1", 00:25:40.511 "name": "Nvme0n1", 00:25:40.511 "nguid": "5583E941244C420D8B232FFEA3F92B1B", 00:25:40.511 "uuid": "5583e941-244c-420d-8b23-2ffea3f92b1b" 00:25:40.511 } 00:25:40.511 ] 00:25:40.511 } 00:25:40.511 ] 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:25:40.511 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLJ916004901P0FGN 00:25:40.511 12:26:33 nvmf_identify_passthru -- 
target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:25:40.511 EAL: No free 2048 kB hugepages reported on node 1 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLJ916004901P0FGN '!=' PHLJ916004901P0FGN ']' 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:40.511 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:25:40.511 12:26:33 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:25:40.511 12:26:33 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:40.511 12:26:33 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:25:40.511 12:26:33 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:40.511 12:26:33 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:25:40.511 12:26:33 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:40.511 12:26:33 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:40.511 rmmod 
nvme_tcp 00:25:40.511 rmmod nvme_fabrics 00:25:40.511 rmmod nvme_keyring 00:25:40.511 12:26:33 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:40.512 12:26:33 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:25:40.512 12:26:33 nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:25:40.512 12:26:33 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 2985114 ']' 00:25:40.512 12:26:33 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 2985114 00:25:40.512 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@950 -- # '[' -z 2985114 ']' 00:25:40.512 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # kill -0 2985114 00:25:40.512 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # uname 00:25:40.512 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:25:40.512 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2985114 00:25:40.512 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:25:40.512 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:25:40.512 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2985114' 00:25:40.512 killing process with pid 2985114 00:25:40.512 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@969 -- # kill 2985114 00:25:40.512 12:26:33 nvmf_identify_passthru -- common/autotest_common.sh@974 -- # wait 2985114 00:25:41.889 12:26:35 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:41.889 12:26:35 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:41.889 12:26:35 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:41.889 12:26:35 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 
00:25:41.889 12:26:35 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:41.889 12:26:35 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:41.889 12:26:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:41.889 12:26:35 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.429 12:26:37 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:44.429 00:25:44.429 real 0m17.958s 00:25:44.429 user 0m26.401s 00:25:44.429 sys 0m2.305s 00:25:44.429 12:26:37 nvmf_identify_passthru -- common/autotest_common.sh@1126 -- # xtrace_disable 00:25:44.429 12:26:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:25:44.429 ************************************ 00:25:44.429 END TEST nvmf_identify_passthru 00:25:44.429 ************************************ 00:25:44.429 12:26:37 -- spdk/autotest.sh@296 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:44.429 12:26:37 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:44.429 12:26:37 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:44.429 12:26:37 -- common/autotest_common.sh@10 -- # set +x 00:25:44.429 ************************************ 00:25:44.429 START TEST nvmf_dif 00:25:44.429 ************************************ 00:25:44.429 12:26:37 nvmf_dif -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:25:44.429 * Looking for test storage... 
00:25:44.429 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:25:44.429 12:26:37 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:44.429 12:26:37 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:44.429 12:26:37 nvmf_dif -- scripts/common.sh@516 -- # 
[[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:44.429 12:26:37 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:44.429 12:26:37 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.429 12:26:37 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.429 12:26:37 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.429 12:26:37 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:25:44.429 12:26:37 nvmf_dif -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:44.429 12:26:37 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:25:44.429 12:26:37 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:25:44.429 12:26:37 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:25:44.429 12:26:37 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:25:44.429 12:26:37 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:44.429 12:26:37 nvmf_dif -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:25:44.429 12:26:37 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:44.429 12:26:37 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:25:44.429 12:26:37 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@298 -- # mlx=() 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 
00:25:46.335 12:26:39 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:25:46.336 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 
(0x8086 - 0x159b)' 00:25:46.336 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 00:25:46.336 Found net devices under 0000:0a:00.0: cvl_0_0 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up 
]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:25:46.336 Found net devices under 0000:0a:00.1: cvl_0_1 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:46.336 12:26:39 nvmf_dif -- 
nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:46.336 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:46.336 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.140 ms 00:25:46.336 00:25:46.336 --- 10.0.0.2 ping statistics --- 00:25:46.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.336 rtt min/avg/max/mdev = 0.140/0.140/0.140/0.000 ms 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:46.336 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
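The nvmf_tcp_init sequence traced above (flush both e810 ports, create the cvl_0_0_ns_spdk namespace, move cvl_0_0 into it, address and bring up both ends, open TCP/4420, then ping each direction) can be collected into one script. This is a sketch reconstructed from the trace, not the SPDK helper itself; the DRY_RUN guard (on by default) is an addition so the commands can be previewed without root or the e810 hardware.

```shell
#!/usr/bin/env bash
# Sketch of the nvmf_tcp_init steps from the trace: cvl_0_0 becomes the
# target interface inside a network namespace, cvl_0_1 stays in the root
# namespace as the initiator. DRY_RUN=1 (the default here) only prints.
setup_nvmf_tcp() {
    local target_if=cvl_0_0 initiator_if=cvl_0_1 ns=cvl_0_0_ns_spdk
    local run=""
    [ "${DRY_RUN:-1}" = 1 ] && run=echo

    $run ip -4 addr flush "$target_if"
    $run ip -4 addr flush "$initiator_if"
    $run ip netns add "$ns"
    $run ip link set "$target_if" netns "$ns"
    $run ip addr add 10.0.0.1/24 dev "$initiator_if"
    $run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
    $run ip link set "$initiator_if" up
    $run ip netns exec "$ns" ip link set "$target_if" up
    $run ip netns exec "$ns" ip link set lo up
    # Allow NVMe/TCP (port 4420) in from the initiator-facing interface.
    $run iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
    # Sanity check: each side should reach the other, as in the trace.
    $run ping -c 1 10.0.0.2
    $run ip netns exec "$ns" ping -c 1 10.0.0.1
}
```

Running `DRY_RUN=0 setup_nvmf_tcp` as root reproduces the command sequence logged between nvmf/common.sh@244 and @268.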
00:25:46.336 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.100 ms 00:25:46.336 00:25:46.336 --- 10.0.0.1 ping statistics --- 00:25:46.336 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:46.336 rtt min/avg/max/mdev = 0.100/0.100/0.100/0.000 ms 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:25:46.336 12:26:39 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:47.274 0000:00:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:47.274 0000:88:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:47.274 0000:00:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:47.274 0000:00:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:47.274 0000:00:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:47.274 0000:00:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:47.274 0000:00:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:47.274 0000:00:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:47.274 0000:00:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:47.274 0000:80:04.7 (8086 0e27): Already using the vfio-pci driver 00:25:47.274 0000:80:04.6 (8086 0e26): Already using the vfio-pci driver 00:25:47.274 0000:80:04.5 (8086 0e25): Already using the vfio-pci driver 00:25:47.274 0000:80:04.4 (8086 0e24): Already using the vfio-pci driver 00:25:47.274 0000:80:04.3 (8086 0e23): Already using the vfio-pci driver 00:25:47.274 0000:80:04.2 (8086 0e22): Already using the vfio-pci driver 00:25:47.274 0000:80:04.1 (8086 0e21): Already using the vfio-pci driver 00:25:47.274 0000:80:04.0 (8086 0e20): Already using the vfio-pci driver 00:25:47.533 12:26:40 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:47.533 12:26:40 
nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:47.533 12:26:40 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:47.533 12:26:40 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:47.533 12:26:40 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:47.533 12:26:40 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:47.533 12:26:40 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:25:47.533 12:26:40 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:25:47.533 12:26:40 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:47.533 12:26:40 nvmf_dif -- common/autotest_common.sh@724 -- # xtrace_disable 00:25:47.533 12:26:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:47.533 12:26:40 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=2988270 00:25:47.533 12:26:40 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:25:47.533 12:26:40 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 2988270 00:25:47.533 12:26:40 nvmf_dif -- common/autotest_common.sh@831 -- # '[' -z 2988270 ']' 00:25:47.534 12:26:40 nvmf_dif -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:47.534 12:26:40 nvmf_dif -- common/autotest_common.sh@836 -- # local max_retries=100 00:25:47.534 12:26:40 nvmf_dif -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:47.534 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:47.534 12:26:40 nvmf_dif -- common/autotest_common.sh@840 -- # xtrace_disable 00:25:47.534 12:26:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:47.534 [2024-07-26 12:26:40.619699] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:25:47.534 [2024-07-26 12:26:40.619785] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:25:47.534 EAL: No free 2048 kB hugepages reported on node 1 00:25:47.534 [2024-07-26 12:26:40.684276] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:47.792 [2024-07-26 12:26:40.793960] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:25:47.792 [2024-07-26 12:26:40.794015] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:25:47.792 [2024-07-26 12:26:40.794029] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:25:47.792 [2024-07-26 12:26:40.794041] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:25:47.792 [2024-07-26 12:26:40.794051] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
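waitforlisten above blocks until nvmf_tgt (pid 2988270) is up and accepting RPCs on /var/tmp/spdk.sock, with max_retries=100 per the trace. A minimal sketch of that polling loop follows; the real helper also verifies the pid is still alive and actually connects to the socket, whereas this version only waits for the path to appear.

```shell
# Sketch of the waitforlisten idea: after launching nvmf_tgt, poll with a
# bounded retry budget until its RPC UNIX socket (e.g. /var/tmp/spdk.sock)
# shows up. Simplified: no pid liveness check, no connect() probe.
wait_for_listen() {
    local sock_path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [ -e "$sock_path" ] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $sock_path" >&2
    return 1
}
```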
00:25:47.792 [2024-07-26 12:26:40.794098] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:25:47.792 12:26:40 nvmf_dif -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:25:47.792 12:26:40 nvmf_dif -- common/autotest_common.sh@864 -- # return 0 00:25:47.792 12:26:40 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:47.792 12:26:40 nvmf_dif -- common/autotest_common.sh@730 -- # xtrace_disable 00:25:47.792 12:26:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:47.792 12:26:40 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:47.792 12:26:40 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:25:47.792 12:26:40 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:25:47.792 12:26:40 nvmf_dif -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.792 12:26:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:47.792 [2024-07-26 12:26:40.940699] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:25:47.792 12:26:40 nvmf_dif -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.792 12:26:40 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:25:47.792 12:26:40 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:25:47.792 12:26:40 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:25:47.792 12:26:40 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:25:47.792 ************************************ 00:25:47.792 START TEST fio_dif_1_default 00:25:47.792 ************************************ 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1125 -- # fio_dif_1 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:47.792 bdev_null0 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:25:47.792 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.793 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:47.793 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.793 12:26:40 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:25:47.793 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:25:47.793 12:26:40 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:25:47.793 [2024-07-26 12:26:41.001026] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:25:47.793 { 00:25:47.793 "params": { 00:25:47.793 "name": "Nvme$subsystem", 00:25:47.793 "trtype": "$TEST_TRANSPORT", 00:25:47.793 "traddr": "$NVMF_FIRST_TARGET_IP", 00:25:47.793 "adrfam": "ipv4", 00:25:47.793 "trsvcid": "$NVMF_PORT", 00:25:47.793 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:25:47.793 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:25:47.793 "hdgst": ${hdgst:-false}, 00:25:47.793 "ddgst": ${ddgst:-false} 00:25:47.793 }, 00:25:47.793 "method": "bdev_nvme_attach_controller" 00:25:47.793 } 00:25:47.793 EOF 00:25:47.793 )") 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:25:47.793 12:26:41 
nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 
00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:25:47.793 "params": { 00:25:47.793 "name": "Nvme0", 00:25:47.793 "trtype": "tcp", 00:25:47.793 "traddr": "10.0.0.2", 00:25:47.793 "adrfam": "ipv4", 00:25:47.793 "trsvcid": "4420", 00:25:47.793 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:25:47.793 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:25:47.793 "hdgst": false, 00:25:47.793 "ddgst": false 00:25:47.793 }, 00:25:47.793 "method": "bdev_nvme_attach_controller" 00:25:47.793 }' 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:25:47.793 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:25:48.052 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:25:48.052 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:25:48.052 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:25:48.052 12:26:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio 
--ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:25:48.052 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:25:48.052 fio-3.35 00:25:48.052 Starting 1 thread 00:25:48.052 EAL: No free 2048 kB hugepages reported on node 1 00:26:00.252 00:26:00.252 filename0: (groupid=0, jobs=1): err= 0: pid=2988501: Fri Jul 26 12:26:51 2024 00:26:00.252 read: IOPS=189, BW=759KiB/s (777kB/s)(7616KiB/10040msec) 00:26:00.252 slat (usec): min=5, max=101, avg= 8.49, stdev= 3.48 00:26:00.252 clat (usec): min=784, max=44410, avg=21065.03, stdev=20139.20 00:26:00.252 lat (usec): min=792, max=44441, avg=21073.52, stdev=20139.00 00:26:00.252 clat percentiles (usec): 00:26:00.252 | 1.00th=[ 816], 5.00th=[ 824], 10.00th=[ 824], 20.00th=[ 840], 00:26:00.252 | 30.00th=[ 848], 40.00th=[ 857], 50.00th=[41157], 60.00th=[41157], 00:26:00.252 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:00.252 | 99.00th=[41157], 99.50th=[41157], 99.90th=[44303], 99.95th=[44303], 00:26:00.252 | 99.99th=[44303] 00:26:00.252 bw ( KiB/s): min= 672, max= 768, per=100.00%, avg=760.00, stdev=25.16, samples=20 00:26:00.252 iops : min= 168, max= 192, avg=190.00, stdev= 6.29, samples=20 00:26:00.252 lat (usec) : 1000=49.79% 00:26:00.252 lat (msec) : 50=50.21% 00:26:00.252 cpu : usr=89.73%, sys=9.99%, ctx=22, majf=0, minf=251 00:26:00.252 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:00.252 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.252 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:00.252 issued rwts: total=1904,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:00.252 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:00.252 00:26:00.252 Run status group 0 (all jobs): 00:26:00.252 READ: bw=759KiB/s (777kB/s), 759KiB/s-759KiB/s (777kB/s-777kB/s), io=7616KiB (7799kB), run=10040-10040msec 00:26:00.252 12:26:52 
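The fio summary above reports 759 KiB/s at ~189-190 IOPS for a 10040 ms run of 4096 B reads with total=1904 issued; those figures are mutually consistent, which is quick to cross-check:

```shell
# Cross-check the fio summary from the log: 1904 reads x 4096 B over
# 10040 ms should reproduce the reported ~190 IOPS and 759 KiB/s.
reads=1904        # issued rwts: total=1904,...
bs=4096           # bs=(R) 4096B
runtime_ms=10040  # run=10040-10040msec

iops=$(awk -v r="$reads" -v t="$runtime_ms" 'BEGIN { printf "%.0f", r / (t / 1000) }')
kibps=$(awk -v r="$reads" -v b="$bs" -v t="$runtime_ms" \
    'BEGIN { printf "%.0f", r * b / 1024 / (t / 1000) }')
echo "$iops IOPS, $kibps KiB/s"   # prints: 190 IOPS, 759 KiB/s
```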
nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.252 00:26:00.252 real 0m11.315s 00:26:00.252 user 0m10.353s 00:26:00.252 sys 0m1.293s 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:26:00.252 ************************************ 00:26:00.252 END TEST fio_dif_1_default 00:26:00.252 ************************************ 00:26:00.252 12:26:52 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:26:00.252 12:26:52 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:00.252 12:26:52 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:00.252 
12:26:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:00.252 ************************************ 00:26:00.252 START TEST fio_dif_1_multi_subsystems 00:26:00.252 ************************************ 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1125 -- # fio_dif_1_multi_subsystems 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.252 bdev_null0 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- 
target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.252 [2024-07-26 12:26:52.365035] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.252 bdev_null1 00:26:00.252 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 
53313233-1 --allow-any-host 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:00.253 12:26:52 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:00.253 { 00:26:00.253 "params": { 00:26:00.253 "name": "Nvme$subsystem", 00:26:00.253 "trtype": "$TEST_TRANSPORT", 00:26:00.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.253 "adrfam": "ipv4", 00:26:00.253 "trsvcid": "$NVMF_PORT", 00:26:00.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.253 "hdgst": ${hdgst:-false}, 00:26:00.253 "ddgst": ${ddgst:-false} 00:26:00.253 }, 00:26:00.253 "method": "bdev_nvme_attach_controller" 00:26:00.253 } 00:26:00.253 EOF 00:26:00.253 )") 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.253 12:26:52 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:00.253 { 00:26:00.253 "params": { 00:26:00.253 "name": "Nvme$subsystem", 00:26:00.253 "trtype": "$TEST_TRANSPORT", 00:26:00.253 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.253 "adrfam": "ipv4", 00:26:00.253 "trsvcid": "$NVMF_PORT", 00:26:00.253 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.253 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.253 "hdgst": ${hdgst:-false}, 00:26:00.253 "ddgst": ${ddgst:-false} 00:26:00.253 }, 00:26:00.253 "method": "bdev_nvme_attach_controller" 00:26:00.253 } 00:26:00.253 EOF 00:26:00.253 )") 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:26:00.253 12:26:52 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:00.253 "params": { 00:26:00.253 "name": "Nvme0", 00:26:00.253 "trtype": "tcp", 00:26:00.253 "traddr": "10.0.0.2", 00:26:00.253 "adrfam": "ipv4", 00:26:00.253 "trsvcid": "4420", 00:26:00.253 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:00.253 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:00.253 "hdgst": false, 00:26:00.253 "ddgst": false 00:26:00.253 }, 00:26:00.253 "method": "bdev_nvme_attach_controller" 00:26:00.253 },{ 00:26:00.253 "params": { 00:26:00.253 "name": "Nvme1", 00:26:00.253 "trtype": "tcp", 00:26:00.253 "traddr": "10.0.0.2", 00:26:00.253 "adrfam": "ipv4", 00:26:00.253 "trsvcid": "4420", 00:26:00.253 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:00.253 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:00.253 "hdgst": false, 00:26:00.253 "ddgst": false 00:26:00.253 }, 00:26:00.253 "method": "bdev_nvme_attach_controller" 00:26:00.253 }' 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:00.253 
12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:00.253 12:26:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:00.253 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:00.253 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:26:00.253 fio-3.35 00:26:00.253 Starting 2 threads 00:26:00.253 EAL: No free 2048 kB hugepages reported on node 1 00:26:10.219 00:26:10.219 filename0: (groupid=0, jobs=1): err= 0: pid=2990024: Fri Jul 26 12:27:03 2024 00:26:10.219 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10003msec) 00:26:10.219 slat (nsec): min=7007, max=54954, avg=9078.30, stdev=3244.52 00:26:10.219 clat (usec): min=40861, max=43017, avg=41303.22, stdev=515.14 00:26:10.219 lat (usec): min=40880, max=43031, avg=41312.30, stdev=515.36 00:26:10.219 clat percentiles (usec): 00:26:10.219 | 1.00th=[41157], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:26:10.219 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:26:10.219 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:26:10.219 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:26:10.219 | 99.99th=[43254] 00:26:10.219 bw ( KiB/s): min= 352, max= 416, per=33.79%, avg=387.37, stdev=14.68, samples=19 00:26:10.219 iops : min= 88, max= 104, avg=96.84, stdev= 3.67, samples=19 00:26:10.219 lat (msec) : 50=100.00% 00:26:10.219 cpu : usr=94.68%, sys=5.02%, ctx=16, majf=0, minf=128 
00:26:10.219 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:10.219 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.219 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.219 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.220 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:10.220 filename1: (groupid=0, jobs=1): err= 0: pid=2990025: Fri Jul 26 12:27:03 2024 00:26:10.220 read: IOPS=189, BW=758KiB/s (776kB/s)(7584KiB/10002msec) 00:26:10.220 slat (nsec): min=6939, max=68643, avg=8813.23, stdev=3168.86 00:26:10.220 clat (usec): min=740, max=42278, avg=21072.39, stdev=20154.16 00:26:10.220 lat (usec): min=747, max=42332, avg=21081.21, stdev=20154.03 00:26:10.220 clat percentiles (usec): 00:26:10.220 | 1.00th=[ 783], 5.00th=[ 791], 10.00th=[ 799], 20.00th=[ 816], 00:26:10.220 | 30.00th=[ 824], 40.00th=[ 865], 50.00th=[41157], 60.00th=[41157], 00:26:10.220 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:26:10.220 | 99.00th=[41681], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:26:10.220 | 99.99th=[42206] 00:26:10.220 bw ( KiB/s): min= 672, max= 768, per=66.27%, avg=759.58, stdev=25.78, samples=19 00:26:10.220 iops : min= 168, max= 192, avg=189.89, stdev= 6.45, samples=19 00:26:10.220 lat (usec) : 750=0.11%, 1000=49.53% 00:26:10.220 lat (msec) : 2=0.16%, 50=50.21% 00:26:10.220 cpu : usr=94.36%, sys=5.34%, ctx=15, majf=0, minf=172 00:26:10.220 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:10.220 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.220 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:10.220 issued rwts: total=1896,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:10.220 latency : target=0, window=0, percentile=100.00%, depth=4 00:26:10.220 00:26:10.220 Run status group 0 (all jobs): 00:26:10.220 READ: 
bw=1145KiB/s (1173kB/s), 387KiB/s-758KiB/s (396kB/s-776kB/s), io=11.2MiB (11.7MB), run=10002-10003msec 00:26:10.477 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:26:10.477 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:26:10.477 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:10.477 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:10.477 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:26:10.477 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:10.477 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.477 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.477 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.477 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:10.477 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.477 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.735 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.735 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:26:10.735 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:10.735 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:26:10.735 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:10.735 12:27:03 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.735 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.735 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.735 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:10.735 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.735 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.735 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.735 00:26:10.735 real 0m11.411s 00:26:10.735 user 0m20.249s 00:26:10.735 sys 0m1.324s 00:26:10.735 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:10.735 12:27:03 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:26:10.735 ************************************ 00:26:10.735 END TEST fio_dif_1_multi_subsystems 00:26:10.735 ************************************ 00:26:10.735 12:27:03 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:26:10.735 12:27:03 nvmf_dif -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:10.735 12:27:03 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:10.735 12:27:03 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:10.735 ************************************ 00:26:10.735 START TEST fio_dif_rand_params 00:26:10.735 ************************************ 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1125 -- # fio_dif_rand_params 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime 
iodepth files 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.735 bdev_null0 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 
bdev_null0 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:10.735 [2024-07-26 12:27:03.818689] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:10.735 { 00:26:10.735 "params": { 00:26:10.735 "name": "Nvme$subsystem", 00:26:10.735 "trtype": "$TEST_TRANSPORT", 00:26:10.735 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:10.735 "adrfam": "ipv4", 00:26:10.735 "trsvcid": "$NVMF_PORT", 00:26:10.735 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:10.735 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:10.735 "hdgst": 
${hdgst:-false}, 00:26:10.735 "ddgst": ${ddgst:-false} 00:26:10.735 }, 00:26:10.735 "method": "bdev_nvme_attach_controller" 00:26:10.735 } 00:26:10.735 EOF 00:26:10.735 )") 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- 
nvmf/common.sh@556 -- # jq . 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:10.735 "params": { 00:26:10.735 "name": "Nvme0", 00:26:10.735 "trtype": "tcp", 00:26:10.735 "traddr": "10.0.0.2", 00:26:10.735 "adrfam": "ipv4", 00:26:10.735 "trsvcid": "4420", 00:26:10.735 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:10.735 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:10.735 "hdgst": false, 00:26:10.735 "ddgst": false 00:26:10.735 }, 00:26:10.735 "method": "bdev_nvme_attach_controller" 00:26:10.735 }' 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:10.735 12:27:03 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:10.995 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:10.995 ... 00:26:10.995 fio-3.35 00:26:10.995 Starting 3 threads 00:26:10.995 EAL: No free 2048 kB hugepages reported on node 1 00:26:17.564 00:26:17.564 filename0: (groupid=0, jobs=1): err= 0: pid=2991516: Fri Jul 26 12:27:09 2024 00:26:17.564 read: IOPS=218, BW=27.3MiB/s (28.6MB/s)(137MiB/5003msec) 00:26:17.564 slat (nsec): min=5063, max=46412, avg=15638.86, stdev=5513.31 00:26:17.564 clat (usec): min=5712, max=90279, avg=13721.97, stdev=11677.89 00:26:17.564 lat (usec): min=5725, max=90292, avg=13737.61, stdev=11678.14 00:26:17.564 clat percentiles (usec): 00:26:17.564 | 1.00th=[ 6063], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 8094], 00:26:17.564 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10290], 60.00th=[11207], 00:26:17.564 | 70.00th=[12518], 80.00th=[13829], 90.00th=[16188], 95.00th=[50594], 00:26:17.564 | 99.00th=[53740], 99.50th=[55837], 99.90th=[89654], 99.95th=[90702], 00:26:17.564 | 99.99th=[90702] 00:26:17.564 bw ( KiB/s): min=25344, max=31488, per=37.72%, avg=27884.20, stdev=2140.59, samples=10 00:26:17.564 iops : min= 198, max= 246, avg=217.80, stdev=16.69, samples=10 00:26:17.564 lat (msec) : 10=45.33%, 20=46.89%, 50=1.92%, 100=5.86% 00:26:17.564 cpu : usr=91.30%, sys=6.42%, ctx=336, majf=0, minf=89 00:26:17.564 IO depths : 1=1.6%, 2=98.4%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.564 issued rwts: total=1092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:26:17.564 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.564 filename0: (groupid=0, jobs=1): err= 0: pid=2991517: Fri Jul 26 12:27:09 2024 00:26:17.564 read: IOPS=181, BW=22.7MiB/s (23.8MB/s)(114MiB/5028msec) 00:26:17.564 slat (usec): min=4, max=101, avg=15.41, stdev= 5.60 00:26:17.564 clat (usec): min=4823, max=91772, avg=16515.40, stdev=15235.13 00:26:17.564 lat (usec): min=4835, max=91793, avg=16530.81, stdev=15235.50 00:26:17.564 clat percentiles (usec): 00:26:17.564 | 1.00th=[ 5473], 5.00th=[ 5866], 10.00th=[ 6456], 20.00th=[ 8291], 00:26:17.564 | 30.00th=[ 8979], 40.00th=[ 9896], 50.00th=[10945], 60.00th=[12125], 00:26:17.564 | 70.00th=[13304], 80.00th=[14746], 90.00th=[50594], 95.00th=[53216], 00:26:17.564 | 99.00th=[55837], 99.50th=[56886], 99.90th=[91751], 99.95th=[91751], 00:26:17.564 | 99.99th=[91751] 00:26:17.564 bw ( KiB/s): min=13824, max=36864, per=31.48%, avg=23273.90, stdev=7175.58, samples=10 00:26:17.564 iops : min= 108, max= 288, avg=181.80, stdev=56.08, samples=10 00:26:17.564 lat (msec) : 10=41.34%, 20=44.08%, 50=3.84%, 100=10.75% 00:26:17.564 cpu : usr=94.41%, sys=5.17%, ctx=12, majf=0, minf=156 00:26:17.564 IO depths : 1=1.2%, 2=98.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.564 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.564 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.564 issued rwts: total=912,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.564 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.564 filename0: (groupid=0, jobs=1): err= 0: pid=2991518: Fri Jul 26 12:27:09 2024 00:26:17.564 read: IOPS=179, BW=22.5MiB/s (23.6MB/s)(113MiB/5002msec) 00:26:17.564 slat (nsec): min=5106, max=70380, avg=15109.63, stdev=4972.39 00:26:17.564 clat (usec): min=5667, max=96926, avg=16647.92, stdev=15447.00 00:26:17.564 lat (usec): min=5676, max=96940, avg=16663.03, stdev=15447.19 00:26:17.564 clat percentiles (usec): 00:26:17.564 | 
1.00th=[ 5800], 5.00th=[ 6194], 10.00th=[ 7570], 20.00th=[ 8717], 00:26:17.565 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10814], 60.00th=[11994], 00:26:17.565 | 70.00th=[12911], 80.00th=[14222], 90.00th=[49546], 95.00th=[52167], 00:26:17.565 | 99.00th=[54789], 99.50th=[89654], 99.90th=[96994], 99.95th=[96994], 00:26:17.565 | 99.99th=[96994] 00:26:17.565 bw ( KiB/s): min=14080, max=29440, per=31.47%, avg=23267.56, stdev=4889.94, samples=9 00:26:17.565 iops : min= 110, max= 230, avg=181.78, stdev=38.20, samples=9 00:26:17.565 lat (msec) : 10=43.78%, 20=41.44%, 50=5.56%, 100=9.22% 00:26:17.565 cpu : usr=92.52%, sys=5.98%, ctx=352, majf=0, minf=121 00:26:17.565 IO depths : 1=0.9%, 2=99.1%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:17.565 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.565 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:17.565 issued rwts: total=900,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:17.565 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:17.565 00:26:17.565 Run status group 0 (all jobs): 00:26:17.565 READ: bw=72.2MiB/s (75.7MB/s), 22.5MiB/s-27.3MiB/s (23.6MB/s-28.6MB/s), io=363MiB (381MB), run=5002-5028msec 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.565 bdev_null0 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 
== 0 ]] 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:17.565 [2024-07-26 12:27:09.881468] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:26:17.565 12:27:09 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:17.565 bdev_null1
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@"
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:17.565 bdev_null2
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host
00:26:17.565 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]]
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=()
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:17.566 {
00:26:17.566 "params": {
00:26:17.566 "name": "Nvme$subsystem",
00:26:17.566 "trtype": "$TEST_TRANSPORT",
00:26:17.566 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:17.566 "adrfam": "ipv4",
00:26:17.566 "trsvcid": "$NVMF_PORT",
00:26:17.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:17.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:17.566 "hdgst": ${hdgst:-false},
00:26:17.566 "ddgst": ${ddgst:-false}
00:26:17.566 },
00:26:17.566 "method": "bdev_nvme_attach_controller"
00:26:17.566 }
00:26:17.566 EOF
00:26:17.566 )")
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib=
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 ))
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:17.566 {
00:26:17.566 "params": {
00:26:17.566 "name": "Nvme$subsystem",
00:26:17.566 "trtype": "$TEST_TRANSPORT",
00:26:17.566 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:17.566 "adrfam": "ipv4",
00:26:17.566 "trsvcid": "$NVMF_PORT",
00:26:17.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:17.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:17.566 "hdgst": ${hdgst:-false},
00:26:17.566 "ddgst": ${ddgst:-false}
00:26:17.566 },
00:26:17.566 "method": "bdev_nvme_attach_controller"
00:26:17.566 }
00:26:17.566 EOF
00:26:17.566 )")
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ ))
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files ))
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}"
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF
00:26:17.566 {
00:26:17.566 "params": {
00:26:17.566 "name": "Nvme$subsystem",
00:26:17.566 "trtype": "$TEST_TRANSPORT",
00:26:17.566 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:17.566 "adrfam": "ipv4",
00:26:17.566 "trsvcid": "$NVMF_PORT",
00:26:17.566 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:17.566 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:17.566 "hdgst": ${hdgst:-false},
00:26:17.566 "ddgst": ${ddgst:-false}
00:26:17.566 },
00:26:17.566 "method": "bdev_nvme_attach_controller"
00:26:17.566 }
00:26:17.566 EOF
00:26:17.566 )")
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq .
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=,
00:26:17.566 12:27:09 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{
00:26:17.566 "params": {
00:26:17.566 "name": "Nvme0",
00:26:17.566 "trtype": "tcp",
00:26:17.566 "traddr": "10.0.0.2",
00:26:17.566 "adrfam": "ipv4",
00:26:17.566 "trsvcid": "4420",
00:26:17.566 "subnqn": "nqn.2016-06.io.spdk:cnode0",
00:26:17.566 "hostnqn": "nqn.2016-06.io.spdk:host0",
00:26:17.566 "hdgst": false,
00:26:17.566 "ddgst": false
00:26:17.566 },
00:26:17.566 "method": "bdev_nvme_attach_controller"
00:26:17.567 },{
00:26:17.567 "params": {
00:26:17.567 "name": "Nvme1",
00:26:17.567 "trtype": "tcp",
00:26:17.567 "traddr": "10.0.0.2",
00:26:17.567 "adrfam": "ipv4",
00:26:17.567 "trsvcid": "4420",
00:26:17.567 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:17.567 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:17.567 "hdgst": false,
00:26:17.567 "ddgst": false
00:26:17.567 },
00:26:17.567 "method": "bdev_nvme_attach_controller"
00:26:17.567 },{
00:26:17.567 "params": {
00:26:17.567 "name": "Nvme2",
00:26:17.567 "trtype": "tcp",
00:26:17.567 "traddr": "10.0.0.2",
00:26:17.567 "adrfam": "ipv4",
00:26:17.567 "trsvcid": "4420",
00:26:17.567 "subnqn": "nqn.2016-06.io.spdk:cnode2",
00:26:17.567 "hostnqn": "nqn.2016-06.io.spdk:host2",
00:26:17.567 "hdgst": false,
00:26:17.567 "ddgst": false
00:26:17.567 },
00:26:17.567 "method": "bdev_nvme_attach_controller"
00:26:17.567 }'
00:26:17.567 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:26:17.567 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:26:17.567 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}"
00:26:17.567 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev
00:26:17.567 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan
00:26:17.567 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}'
00:26:17.567 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib=
00:26:17.567 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]]
00:26:17.567 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev'
00:26:17.567 12:27:09 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61
00:26:17.567 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:26:17.567 ...
00:26:17.567 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:26:17.567 ...
00:26:17.567 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16
00:26:17.567 ...
00:26:17.567 fio-3.35
00:26:17.567 Starting 24 threads
00:26:17.567 EAL: No free 2048 kB hugepages reported on node 1
00:26:29.795
00:26:29.795 filename0: (groupid=0, jobs=1): err= 0: pid=2992779: Fri Jul 26 12:27:21 2024
00:26:29.795 read: IOPS=471, BW=1886KiB/s (1932kB/s)(18.4MiB/10009msec)
00:26:29.795 slat (usec): min=5, max=111, avg=33.71, stdev=20.05
00:26:29.795 clat (usec): min=15083, max=59194, avg=33628.54, stdev=2080.04
00:26:29.795 lat (usec): min=15120, max=59213, avg=33662.25, stdev=2077.81
00:26:29.795 clat percentiles (usec):
00:26:29.795 | 1.00th=[31851], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162],
00:26:29.795 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817],
00:26:29.795 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866],
00:26:29.795 | 99.00th=[36963], 99.50th=[43779], 99.90th=[58983], 99.95th=[58983],
00:26:29.795 | 99.99th=[58983]
00:26:29.795 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1879.58, stdev=74.55, samples=19
00:26:29.795 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19
00:26:29.795 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34%
00:26:29.795 cpu : usr=97.01%, sys=1.93%, ctx=143, majf=0, minf=46
00:26:29.795 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0%
00:26:29.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.795 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.795 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.795 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.795 filename0: (groupid=0, jobs=1): err= 0: pid=2992780: Fri Jul 26 12:27:21 2024
00:26:29.795 read: IOPS=472, BW=1891KiB/s (1937kB/s)(18.5MiB/10030msec)
00:26:29.795 slat (usec): min=6, max=206, avg=36.14, stdev=15.04
00:26:29.795 clat (usec): min=9696, max=49350, avg=33538.70, stdev=1694.06
00:26:29.795 lat (usec): min=9768, max=49380, avg=33574.84, stdev=1690.70
00:26:29.795 clat percentiles (usec):
00:26:29.795 | 1.00th=[28443], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162],
00:26:29.795 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817],
00:26:29.795 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341],
00:26:29.795 | 99.00th=[36963], 99.50th=[41681], 99.90th=[45876], 99.95th=[45876],
00:26:29.795 | 99.99th=[49546]
00:26:29.795 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1890.55, stdev=53.37, samples=20
00:26:29.795 iops : min= 448, max= 480, avg=472.60, stdev=13.38, samples=20
00:26:29.795 lat (msec) : 10=0.08%, 20=0.38%, 50=99.54%
00:26:29.795 cpu : usr=95.09%, sys=2.73%, ctx=245, majf=0, minf=30
00:26:29.795 IO depths : 1=6.1%, 2=12.2%, 4=24.6%, 8=50.6%, 16=6.5%, 32=0.0%, >=64=0.0%
00:26:29.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.795 complete : 0=0.0%, 4=94.0%, 8=0.2%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.795 issued rwts: total=4742,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.795 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.795 filename0: (groupid=0, jobs=1): err= 0: pid=2992781: Fri Jul 26 12:27:21 2024
00:26:29.795 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10013msec)
00:26:29.795 slat (usec): min=8, max=109, avg=36.95, stdev=13.85
00:26:29.795 clat (usec): min=24773, max=46819, avg=33613.17, stdev=1165.60
00:26:29.795 lat (usec): min=24815, max=46849, avg=33650.12, stdev=1164.04
00:26:29.795 clat percentiles (usec):
00:26:29.795 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162],
00:26:29.795 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817],
00:26:29.795 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341],
00:26:29.795 | 99.00th=[36963], 99.50th=[38011], 99.90th=[46924], 99.95th=[46924],
00:26:29.795 | 99.99th=[46924]
00:26:29.795 bw ( KiB/s): min= 1784, max= 1920, per=4.16%, avg=1881.20, stdev=60.83, samples=20
00:26:29.795 iops : min= 446, max= 480, avg=470.30, stdev=15.21, samples=20
00:26:29.795 lat (msec) : 50=100.00%
00:26:29.795 cpu : usr=91.89%, sys=4.28%, ctx=191, majf=0, minf=26
00:26:29.795 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:26:29.795 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.795 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.795 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.795 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.795 filename0: (groupid=0, jobs=1): err= 0: pid=2992782: Fri Jul 26 12:27:21 2024
00:26:29.795 read: IOPS=471, BW=1884KiB/s (1930kB/s)(18.4MiB/10019msec)
00:26:29.795 slat (usec): min=8, max=109, avg=34.02, stdev=18.79
00:26:29.795 clat (usec): min=26217, max=51681, avg=33654.73, stdev=1284.59
00:26:29.795 lat (usec): min=26261, max=51713, avg=33688.75, stdev=1281.12
00:26:29.795 clat percentiles (usec):
00:26:29.795 | 1.00th=[32375], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162],
00:26:29.795 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817],
00:26:29.795 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866],
00:26:29.795 | 99.00th=[36963], 99.50th=[37487], 99.90th=[51643], 99.95th=[51643],
00:26:29.795 | 99.99th=[51643]
00:26:29.795 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.60, stdev=73.12, samples=20
00:26:29.795 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20
00:26:29.795 lat (msec) : 50=99.66%, 100=0.34%
00:26:29.795 cpu : usr=97.68%, sys=1.91%, ctx=24, majf=0, minf=27
00:26:29.795 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:26:29.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.796 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.796 filename0: (groupid=0, jobs=1): err= 0: pid=2992783: Fri Jul 26 12:27:21 2024
00:26:29.796 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10013msec)
00:26:29.796 slat (usec): min=6, max=124, avg=39.11, stdev=18.37
00:26:29.796 clat (usec): min=17401, max=59059, avg=33553.62, stdev=1467.46
00:26:29.796 lat (usec): min=17410, max=59102, avg=33592.73, stdev=1467.10
00:26:29.796 clat percentiles (usec):
00:26:29.796 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162],
00:26:29.796 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817],
00:26:29.796 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341],
00:26:29.796 | 99.00th=[36963], 99.50th=[37487], 99.90th=[45876], 99.95th=[45876],
00:26:29.796 | 99.99th=[58983]
00:26:29.796 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1879.58, stdev=74.55, samples=19
00:26:29.796 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19
00:26:29.796 lat (msec) : 20=0.30%, 50=99.66%, 100=0.04%
00:26:29.796 cpu : usr=92.38%, sys=4.03%, ctx=233, majf=0, minf=37
00:26:29.796 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:26:29.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.796 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.796 filename0: (groupid=0, jobs=1): err= 0: pid=2992784: Fri Jul 26 12:27:21 2024
00:26:29.796 read: IOPS=471, BW=1886KiB/s (1932kB/s)(18.4MiB/10008msec)
00:26:29.796 slat (nsec): min=9507, max=74301, avg=32050.11, stdev=9287.44
00:26:29.796 clat (usec): min=13092, max=78204, avg=33634.16, stdev=2623.08
00:26:29.796 lat (usec): min=13112, max=78244, avg=33666.21, stdev=2623.37
00:26:29.796 clat percentiles (usec):
00:26:29.796 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162],
00:26:29.796 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817],
00:26:29.796 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341],
00:26:29.796 | 99.00th=[36439], 99.50th=[37487], 99.90th=[69731], 99.95th=[69731],
00:26:29.796 | 99.99th=[78119]
00:26:29.796 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1879.58, stdev=74.55, samples=19
00:26:29.796 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19
00:26:29.796 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34%
00:26:29.796 cpu : usr=97.96%, sys=1.63%, ctx=13, majf=0, minf=44
00:26:29.796 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:26:29.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.796 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.796 filename0: (groupid=0, jobs=1): err= 0: pid=2992785: Fri Jul 26 12:27:21 2024
00:26:29.796 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10005msec)
00:26:29.796 slat (nsec): min=9304, max=74441, avg=29117.38, stdev=8984.93
00:26:29.796 clat (usec): min=21920, max=48591, avg=33658.38, stdev=839.10
00:26:29.796 lat (usec): min=21934, max=48626, avg=33687.49, stdev=838.02
00:26:29.796 clat percentiles (usec):
00:26:29.796 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162],
00:26:29.796 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817],
00:26:29.796 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866],
00:26:29.796 | 99.00th=[36963], 99.50th=[37487], 99.90th=[38011], 99.95th=[38011],
00:26:29.796 | 99.99th=[48497]
00:26:29.796 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1886.32, stdev=57.91, samples=19
00:26:29.796 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19
00:26:29.796 lat (msec) : 50=100.00%
00:26:29.796 cpu : usr=98.11%, sys=1.47%, ctx=23, majf=0, minf=28
00:26:29.796 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:26:29.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.796 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.796 filename0: (groupid=0, jobs=1): err= 0: pid=2992786: Fri Jul 26 12:27:21 2024
00:26:29.796 read: IOPS=471, BW=1885KiB/s (1930kB/s)(18.4MiB/10018msec)
00:26:29.796 slat (usec): min=10, max=122, avg=38.91, stdev=13.86
00:26:29.796 clat (usec): min=20397, max=55990, avg=33611.89, stdev=1617.95
00:26:29.796 lat (usec): min=20418, max=56021, avg=33650.80, stdev=1617.42
00:26:29.796 clat percentiles (usec):
00:26:29.796 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162],
00:26:29.796 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817],
00:26:29.796 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341],
00:26:29.796 | 99.00th=[36963], 99.50th=[37487], 99.90th=[55837], 99.95th=[55837],
00:26:29.796 | 99.99th=[55837]
00:26:29.796 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.60, stdev=73.12, samples=20
00:26:29.796 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20
00:26:29.796 lat (msec) : 50=99.66%, 100=0.34%
00:26:29.796 cpu : usr=97.12%, sys=1.93%, ctx=261, majf=0, minf=56
00:26:29.796 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:26:29.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.796 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.796 filename1: (groupid=0, jobs=1): err= 0: pid=2992787: Fri Jul 26 12:27:21 2024
00:26:29.796 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.6MiB/10048msec)
00:26:29.796 slat (usec): min=17, max=166, avg=64.68, stdev=16.26
00:26:29.796 clat (usec): min=19690, max=61405, avg=33466.36, stdev=2594.19
00:26:29.796 lat (usec): min=19785, max=61471, avg=33531.03, stdev=2591.80
00:26:29.796 clat percentiles (usec):
00:26:29.796 | 1.00th=[23200], 5.00th=[32375], 10.00th=[32900], 20.00th=[33162],
00:26:29.796 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817],
00:26:29.796 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866],
00:26:29.796 | 99.00th=[38011], 99.50th=[50070], 99.90th=[61080], 99.95th=[61080],
00:26:29.796 | 99.99th=[61604]
00:26:29.796 bw ( KiB/s): min= 1664, max= 2064, per=4.19%, avg=1895.10, stdev=74.24, samples=20
00:26:29.796 iops : min= 416, max= 516, avg=473.75, stdev=18.56, samples=20
00:26:29.796 lat (msec) : 20=0.15%, 50=99.41%, 100=0.44%
00:26:29.796 cpu : usr=98.14%, sys=1.41%, ctx=13, majf=0, minf=37
00:26:29.796 IO depths : 1=0.1%, 2=2.4%, 4=9.3%, 8=72.1%, 16=16.2%, 32=0.0%, >=64=0.0%
00:26:29.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 complete : 0=0.0%, 4=91.2%, 8=6.7%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 issued rwts: total=4754,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.796 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.796 filename1: (groupid=0, jobs=1): err= 0: pid=2992788: Fri Jul 26 12:27:21 2024
00:26:29.796 read: IOPS=471, BW=1884KiB/s (1930kB/s)(18.4MiB/10019msec)
00:26:29.796 slat (usec): min=8, max=113, avg=37.96, stdev=14.90
00:26:29.796 clat (usec): min=20066, max=59043, avg=33621.79, stdev=1691.18
00:26:29.796 lat (usec): min=20123, max=59095, avg=33659.75, stdev=1690.24
00:26:29.796 clat percentiles (usec):
00:26:29.796 | 1.00th=[32113], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162],
00:26:29.796 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817],
00:26:29.796 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341],
00:26:29.796 | 99.00th=[36963], 99.50th=[47973], 99.90th=[51643], 99.95th=[51643],
00:26:29.796 | 99.99th=[58983]
00:26:29.796 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.60, stdev=73.12, samples=20
00:26:29.796 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20
00:26:29.796 lat (msec) : 50=99.62%, 100=0.38%
00:26:29.796 cpu : usr=97.41%, sys=1.93%, ctx=131, majf=0, minf=31
00:26:29.796 IO depths : 1=6.1%, 2=12.3%, 4=24.9%, 8=50.2%, 16=6.4%, 32=0.0%, >=64=0.0%
00:26:29.796 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.796 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.796 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.796 filename1: (groupid=0, jobs=1): err= 0: pid=2992789: Fri Jul 26 12:27:21 2024
00:26:29.796 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10005msec)
00:26:29.796 slat (usec): min=8, max=210, avg=35.99, stdev=13.36
00:26:29.796 clat (usec): min=19706, max=46542, avg=33587.55, stdev=1088.11
00:26:29.796 lat (usec): min=19718, max=46562, avg=33623.54, stdev=1087.45
00:26:29.796 clat percentiles (usec):
00:26:29.796 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162],
00:26:29.796 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817],
00:26:29.796 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866],
00:26:29.796 | 99.00th=[36963], 99.50th=[37487], 99.90th=[45876], 99.95th=[46400],
00:26:29.796 | 99.99th=[46400]
00:26:29.796 bw ( KiB/s): min= 1792, max= 1936, per=4.17%, avg=1886.32, stdev=58.15, samples=19
00:26:29.797 iops : min= 448, max= 484, avg=471.58, stdev=14.54, samples=19
00:26:29.797 lat (msec) : 20=0.04%, 50=99.96%
00:26:29.797 cpu : usr=94.38%, sys=3.21%, ctx=148, majf=0, minf=40
00:26:29.797 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0%
00:26:29.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.797 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.797 filename1: (groupid=0, jobs=1): err= 0: pid=2992790: Fri Jul 26 12:27:21 2024
00:26:29.797 read: IOPS=480, BW=1922KiB/s (1968kB/s)(18.8MiB/10007msec)
00:26:29.797 slat (nsec): min=8213, max=92892, avg=24235.80, stdev=17662.18
00:26:29.797 clat (usec): min=10400, max=88547, avg=33193.93, stdev=4622.95
00:26:29.797 lat (usec): min=10409, max=88583, avg=33218.16, stdev=4621.61
00:26:29.797 clat percentiles (usec):
00:26:29.797 | 1.00th=[21890], 5.00th=[27132], 10.00th=[27657], 20.00th=[29230],
00:26:29.797 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817],
00:26:29.797 | 70.00th=[33817], 80.00th=[34341], 90.00th=[39060], 95.00th=[40109],
00:26:29.797 | 99.00th=[41157], 99.50th=[45351], 99.90th=[68682], 99.95th=[68682],
00:26:29.797 | 99.99th=[88605]
00:26:29.797 bw ( KiB/s): min= 1680, max= 1984, per=4.24%, avg=1915.79, stdev=64.19, samples=19
00:26:29.797 iops : min= 420, max= 496, avg=478.95, stdev=16.05, samples=19
00:26:29.797 lat (msec) : 20=0.62%, 50=99.04%, 100=0.33%
00:26:29.797 cpu : usr=98.15%, sys=1.43%, ctx=13, majf=0, minf=39
00:26:29.797 IO depths : 1=0.1%, 2=0.2%, 4=2.6%, 8=80.8%, 16=16.4%, 32=0.0%, >=64=0.0%
00:26:29.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 complete : 0=0.0%, 4=89.0%, 8=9.2%, 16=1.8%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 issued rwts: total=4808,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.797 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.797 filename1: (groupid=0, jobs=1): err= 0: pid=2992791: Fri Jul 26 12:27:21 2024
00:26:29.797 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10013msec)
00:26:29.797 slat (usec): min=8, max=106, avg=26.74, stdev=10.55
00:26:29.797 clat (usec): min=28235, max=43542, avg=33703.96, stdev=883.94
00:26:29.797 lat (usec): min=28249, max=43576, avg=33730.70, stdev=884.02
00:26:29.797 clat percentiles (usec):
00:26:29.797 | 1.00th=[32375], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424],
00:26:29.797 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817],
00:26:29.797 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866],
00:26:29.797 | 99.00th=[36963], 99.50th=[37487], 99.90th=[43254], 99.95th=[43254],
00:26:29.797 | 99.99th=[43779]
00:26:29.797 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1881.60, stdev=60.18, samples=20
00:26:29.797 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20
00:26:29.797 lat (msec) : 50=100.00%
00:26:29.797 cpu : usr=93.73%, sys=3.46%, ctx=129, majf=0, minf=39
00:26:29.797 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:26:29.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.797 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.797 filename1: (groupid=0, jobs=1): err= 0: pid=2992792: Fri Jul 26 12:27:21 2024
00:26:29.797 read: IOPS=478, BW=1914KiB/s (1960kB/s)(18.7MiB/10008msec)
00:26:29.797 slat (usec): min=8, max=117, avg=30.13, stdev=21.99
00:26:29.797 clat (usec): min=8685, max=90328, avg=33295.02, stdev=5085.86
00:26:29.797 lat (usec): min=8694, max=90366, avg=33325.14, stdev=5085.21
00:26:29.797 clat percentiles (usec):
00:26:29.797 | 1.00th=[19792], 5.00th=[26870], 10.00th=[27657], 20.00th=[29230],
00:26:29.797 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817],
00:26:29.797 | 70.00th=[33817], 80.00th=[34341], 90.00th=[39060], 95.00th=[40109],
00:26:29.797 | 99.00th=[50594], 99.50th=[56361], 99.90th=[69731], 99.95th=[69731],
00:26:29.797 | 99.99th=[90702]
00:26:29.797 bw ( KiB/s): min= 1712, max= 2032, per=4.22%, avg=1906.53, stdev=70.81, samples=19
00:26:29.797 iops : min= 428, max= 508, avg=476.63, stdev=17.70, samples=19
00:26:29.797 lat (msec) : 10=0.38%, 20=0.67%, 50=97.95%, 100=1.00%
00:26:29.797 cpu : usr=97.98%, sys=1.59%, ctx=12, majf=0, minf=46
00:26:29.797 IO depths : 1=0.1%, 2=0.6%, 4=4.3%, 8=79.0%, 16=16.1%, 32=0.0%, >=64=0.0%
00:26:29.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 complete : 0=0.0%, 4=89.4%, 8=8.5%, 16=2.1%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 issued rwts: total=4790,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.797 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.797 filename1: (groupid=0, jobs=1): err= 0: pid=2992793: Fri Jul 26 12:27:21 2024
00:26:29.797 read: IOPS=473, BW=1893KiB/s (1938kB/s)(18.5MiB/10009msec)
00:26:29.797 slat (usec): min=7, max=130, avg=34.59, stdev=22.79
00:26:29.797 clat (usec): min=11057, max=37981, avg=33504.62, stdev=1614.17
00:26:29.797 lat (usec): min=11094, max=38023, avg=33539.22, stdev=1610.64
00:26:29.797 clat percentiles (usec):
00:26:29.797 | 1.00th=[31589], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162],
00:26:29.797 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817],
00:26:29.797 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866],
00:26:29.797 | 99.00th=[36439], 99.50th=[37487], 99.90th=[38011], 99.95th=[38011],
00:26:29.797 | 99.99th=[38011]
00:26:29.797 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1888.00, stdev=56.87, samples=20
00:26:29.797 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20
00:26:29.797 lat (msec) : 20=0.34%, 50=99.66%
00:26:29.797 cpu : usr=95.23%, sys=2.81%, ctx=137, majf=0, minf=46
00:26:29.797 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0%
00:26:29.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 issued rwts: total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.797 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.797 filename1: (groupid=0, jobs=1): err= 0: pid=2992794: Fri Jul 26 12:27:21 2024
00:26:29.797 read: IOPS=471, BW=1886KiB/s (1932kB/s)(18.4MiB/10008msec)
00:26:29.797 slat (nsec): min=10748, max=93318, avg=40851.60, stdev=15445.21
00:26:29.797 clat (usec): min=12351, max=78496, avg=33555.32, stdev=2664.10
00:26:29.797 lat (usec): min=12399, max=78531, avg=33596.17, stdev=2662.75
00:26:29.797 clat percentiles (usec):
00:26:29.797 | 1.00th=[32113], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162],
00:26:29.797 | 30.00th=[33162], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817],
00:26:29.797 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341],
00:26:29.797 | 99.00th=[36439], 99.50th=[37487], 99.90th=[69731], 99.95th=[69731],
00:26:29.797 | 99.99th=[78119]
00:26:29.797 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1879.58, stdev=74.55, samples=19
00:26:29.797 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19
00:26:29.797 lat (msec) : 20=0.44%, 50=99.22%, 100=0.34%
00:26:29.797 cpu : usr=98.19%, sys=1.39%, ctx=23, majf=0, minf=46
00:26:29.797 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:26:29.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.797 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.797 filename2: (groupid=0, jobs=1): err= 0: pid=2992795: Fri Jul 26 12:27:21 2024
00:26:29.797 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10005msec)
00:26:29.797 slat (nsec): min=8457, max=67170, avg=26395.37, stdev=11327.61
00:26:29.797 clat (usec): min=21682, max=45999, avg=33688.84, stdev=815.86
00:26:29.797 lat (usec): min=21707, max=46024, avg=33715.24, stdev=814.45
00:26:29.797 clat percentiles (usec):
00:26:29.797 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33424],
00:26:29.797 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817],
00:26:29.797 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866],
00:26:29.797 | 99.00th=[36963], 99.50th=[37487], 99.90th=[38011], 99.95th=[38011],
00:26:29.797 | 99.99th=[45876]
00:26:29.797 bw ( KiB/s): min= 1792, max= 1920, per=4.17%, avg=1886.32, stdev=57.91, samples=19
00:26:29.797 iops : min= 448, max= 480, avg=471.58, stdev=14.48, samples=19
00:26:29.797 lat (msec) : 50=100.00%
00:26:29.797 cpu : usr=97.99%, sys=1.57%, ctx=29, majf=0, minf=51
00:26:29.797 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:26:29.797 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.797 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.797 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.797 filename2: (groupid=0, jobs=1): err= 0: pid=2992796: Fri Jul 26 12:27:21 2024
00:26:29.797 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10010msec)
00:26:29.797 slat (usec): min=5, max=131, avg=28.09, stdev=14.02
00:26:29.797 clat (usec): min=15730, max=59913, avg=33676.40, stdev=1996.68
00:26:29.797 lat (usec): min=15739, max=59927, avg=33704.49, stdev=1996.08
00:26:29.797 clat percentiles (usec):
00:26:29.797 | 1.00th=[32113], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162],
00:26:29.797 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817],
00:26:29.797 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866],
00:26:29.797 | 99.00th=[36963], 99.50th=[38011], 99.90th=[60031], 99.95th=[60031],
00:26:29.797 | 99.99th=[60031]
00:26:29.798 bw ( KiB/s): min= 1667, max= 1920, per=4.16%, avg=1879.74, stdev=74.07, samples=19
00:26:29.798 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19
00:26:29.798 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34%
00:26:29.798 cpu : usr=96.32%, sys=2.28%, ctx=227, majf=0, minf=40
00:26:29.798 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:26:29.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.798 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.798 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0
00:26:29.798 latency : target=0, window=0, percentile=100.00%, depth=16
00:26:29.798 filename2: (groupid=0, jobs=1): err= 0: pid=2992797: Fri Jul 26 12:27:21 2024
00:26:29.798 read: IOPS=471, BW=1884KiB/s (1929kB/s)(18.4MiB/10020msec)
00:26:29.798 slat (nsec): min=8472, max=88191, avg=37353.60, stdev=13710.49
00:26:29.798 clat (usec): min=21555, max=57150, avg=33630.35, stdev=1319.86
00:26:29.798 lat (usec): min=21575, max=57170, avg=33667.70, stdev=1318.33
00:26:29.798 clat percentiles (usec):
00:26:29.798 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162],
00:26:29.798 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817],
00:26:29.798 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341],
00:26:29.798 | 99.00th=[36963], 99.50th=[37487], 99.90th=[51643], 99.95th=[51643],
00:26:29.798 | 99.99th=[57410]
00:26:29.798 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1881.60, stdev=73.12, samples=20
00:26:29.798 iops : min= 416, max= 480, avg=470.40, stdev=18.28, samples=20
00:26:29.798 lat (msec) : 50=99.66%, 100=0.34%
00:26:29.798 cpu : usr=97.88%, sys=1.71%, ctx=14, majf=0, minf=30
00:26:29.798 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0%
00:26:29.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:26:29.798 complete
: 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.798 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.798 filename2: (groupid=0, jobs=1): err= 0: pid=2992798: Fri Jul 26 12:27:21 2024 00:26:29.798 read: IOPS=471, BW=1886KiB/s (1931kB/s)(18.4MiB/10013msec) 00:26:29.798 slat (usec): min=8, max=145, avg=61.79, stdev=18.64 00:26:29.798 clat (usec): min=27613, max=43550, avg=33434.04, stdev=940.58 00:26:29.798 lat (usec): min=27694, max=43571, avg=33495.83, stdev=935.45 00:26:29.798 clat percentiles (usec): 00:26:29.798 | 1.00th=[32113], 5.00th=[32637], 10.00th=[32637], 20.00th=[32900], 00:26:29.798 | 30.00th=[33162], 40.00th=[33162], 50.00th=[33424], 60.00th=[33424], 00:26:29.798 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:29.798 | 99.00th=[36963], 99.50th=[37487], 99.90th=[43254], 99.95th=[43779], 00:26:29.798 | 99.99th=[43779] 00:26:29.798 bw ( KiB/s): min= 1792, max= 1920, per=4.16%, avg=1881.60, stdev=60.18, samples=20 00:26:29.798 iops : min= 448, max= 480, avg=470.40, stdev=15.05, samples=20 00:26:29.798 lat (msec) : 50=100.00% 00:26:29.798 cpu : usr=98.15%, sys=1.40%, ctx=12, majf=0, minf=32 00:26:29.798 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:26:29.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.798 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.798 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.798 filename2: (groupid=0, jobs=1): err= 0: pid=2992799: Fri Jul 26 12:27:21 2024 00:26:29.798 read: IOPS=474, BW=1900KiB/s (1945kB/s)(18.6MiB/10019msec) 00:26:29.798 slat (nsec): min=8191, max=99329, avg=34161.04, stdev=17383.34 00:26:29.798 clat (usec): min=11143, max=42179, avg=33407.94, stdev=1780.25 00:26:29.798 lat 
(usec): min=11191, max=42225, avg=33442.10, stdev=1778.33 00:26:29.798 clat percentiles (usec): 00:26:29.798 | 1.00th=[25297], 5.00th=[32637], 10.00th=[32900], 20.00th=[33162], 00:26:29.798 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:26:29.798 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:29.798 | 99.00th=[35390], 99.50th=[35914], 99.90th=[42206], 99.95th=[42206], 00:26:29.798 | 99.99th=[42206] 00:26:29.798 bw ( KiB/s): min= 1792, max= 2080, per=4.19%, avg=1896.80, stdev=69.93, samples=20 00:26:29.798 iops : min= 448, max= 520, avg=474.20, stdev=17.48, samples=20 00:26:29.798 lat (msec) : 20=0.29%, 50=99.71% 00:26:29.798 cpu : usr=98.12%, sys=1.47%, ctx=15, majf=0, minf=42 00:26:29.798 IO depths : 1=5.9%, 2=12.0%, 4=24.7%, 8=50.8%, 16=6.6%, 32=0.0%, >=64=0.0% 00:26:29.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.798 complete : 0=0.0%, 4=94.0%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.798 issued rwts: total=4758,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.798 filename2: (groupid=0, jobs=1): err= 0: pid=2992800: Fri Jul 26 12:27:21 2024 00:26:29.798 read: IOPS=471, BW=1884KiB/s (1930kB/s)(18.4MiB/10007msec) 00:26:29.798 slat (nsec): min=8330, max=85545, avg=34834.56, stdev=12728.59 00:26:29.798 clat (usec): min=13175, max=89199, avg=33733.91, stdev=2889.19 00:26:29.798 lat (usec): min=13189, max=89235, avg=33768.75, stdev=2889.69 00:26:29.798 clat percentiles (usec): 00:26:29.798 | 1.00th=[26870], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:26:29.798 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:26:29.798 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:26:29.798 | 99.00th=[38011], 99.50th=[50594], 99.90th=[68682], 99.95th=[68682], 00:26:29.798 | 99.99th=[89654] 00:26:29.798 bw ( KiB/s): min= 1632, max= 1920, per=4.15%, 
avg=1877.05, stdev=72.75, samples=19 00:26:29.798 iops : min= 408, max= 480, avg=469.26, stdev=18.19, samples=19 00:26:29.798 lat (msec) : 20=0.34%, 50=99.15%, 100=0.51% 00:26:29.798 cpu : usr=97.98%, sys=1.60%, ctx=24, majf=0, minf=30 00:26:29.798 IO depths : 1=0.1%, 2=5.0%, 4=20.0%, 8=61.3%, 16=13.6%, 32=0.0%, >=64=0.0% 00:26:29.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.798 complete : 0=0.0%, 4=93.4%, 8=2.2%, 16=4.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.798 issued rwts: total=4714,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.798 filename2: (groupid=0, jobs=1): err= 0: pid=2992801: Fri Jul 26 12:27:21 2024 00:26:29.798 read: IOPS=472, BW=1890KiB/s (1935kB/s)(18.5MiB/10023msec) 00:26:29.798 slat (usec): min=6, max=265, avg=26.22, stdev=14.22 00:26:29.798 clat (usec): min=16791, max=40884, avg=33637.60, stdev=1209.57 00:26:29.798 lat (usec): min=17027, max=40920, avg=33663.82, stdev=1201.48 00:26:29.798 clat percentiles (usec): 00:26:29.798 | 1.00th=[32637], 5.00th=[32900], 10.00th=[33162], 20.00th=[33162], 00:26:29.798 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33817], 60.00th=[33817], 00:26:29.798 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34866], 00:26:29.798 | 99.00th=[36963], 99.50th=[36963], 99.90th=[37487], 99.95th=[38011], 00:26:29.798 | 99.99th=[40633] 00:26:29.798 bw ( KiB/s): min= 1792, max= 1920, per=4.18%, avg=1888.00, stdev=56.87, samples=20 00:26:29.798 iops : min= 448, max= 480, avg=472.00, stdev=14.22, samples=20 00:26:29.798 lat (msec) : 20=0.34%, 50=99.66% 00:26:29.798 cpu : usr=94.55%, sys=3.09%, ctx=160, majf=0, minf=59 00:26:29.798 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:29.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.798 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.798 issued rwts: 
total=4736,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.798 filename2: (groupid=0, jobs=1): err= 0: pid=2992802: Fri Jul 26 12:27:21 2024 00:26:29.798 read: IOPS=471, BW=1887KiB/s (1932kB/s)(18.4MiB/10007msec) 00:26:29.798 slat (nsec): min=8769, max=83823, avg=34928.11, stdev=11657.61 00:26:29.798 clat (usec): min=13101, max=69060, avg=33601.06, stdev=2558.45 00:26:29.798 lat (usec): min=13121, max=69092, avg=33635.99, stdev=2559.09 00:26:29.798 clat percentiles (usec): 00:26:29.798 | 1.00th=[32375], 5.00th=[32900], 10.00th=[32900], 20.00th=[33162], 00:26:29.798 | 30.00th=[33424], 40.00th=[33424], 50.00th=[33424], 60.00th=[33817], 00:26:29.798 | 70.00th=[33817], 80.00th=[33817], 90.00th=[34341], 95.00th=[34341], 00:26:29.798 | 99.00th=[36963], 99.50th=[37487], 99.90th=[68682], 99.95th=[68682], 00:26:29.798 | 99.99th=[68682] 00:26:29.798 bw ( KiB/s): min= 1664, max= 1920, per=4.16%, avg=1879.58, stdev=74.55, samples=19 00:26:29.798 iops : min= 416, max= 480, avg=469.89, stdev=18.64, samples=19 00:26:29.798 lat (msec) : 20=0.34%, 50=99.32%, 100=0.34% 00:26:29.798 cpu : usr=98.08%, sys=1.50%, ctx=62, majf=0, minf=35 00:26:29.798 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:26:29.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.798 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:29.798 issued rwts: total=4720,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:29.798 latency : target=0, window=0, percentile=100.00%, depth=16 00:26:29.798 00:26:29.798 Run status group 0 (all jobs): 00:26:29.798 READ: bw=44.1MiB/s (46.3MB/s), 1884KiB/s-1922KiB/s (1929kB/s-1968kB/s), io=444MiB (465MB), run=10005-10048msec 00:26:29.798 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:26:29.798 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:29.798 12:27:21 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:29.798 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:29.798 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:29.798 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:29.798 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.798 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.798 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.798 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:29.798 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 
00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.799 bdev_null0 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.799 [2024-07-26 12:27:21.495886] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.799 bdev_null1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.799 12:27:21 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:29.799 { 00:26:29.799 "params": { 00:26:29.799 "name": "Nvme$subsystem", 00:26:29.799 "trtype": "$TEST_TRANSPORT", 00:26:29.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.799 "adrfam": "ipv4", 00:26:29.799 "trsvcid": "$NVMF_PORT", 00:26:29.799 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.799 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.799 "hdgst": ${hdgst:-false}, 00:26:29.799 "ddgst": ${ddgst:-false} 00:26:29.799 }, 00:26:29.799 "method": "bdev_nvme_attach_controller" 00:26:29.799 } 00:26:29.799 EOF 00:26:29.799 )") 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@73 -- # cat 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:29.799 12:27:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:29.799 { 00:26:29.799 "params": { 00:26:29.799 "name": "Nvme$subsystem", 00:26:29.799 "trtype": "$TEST_TRANSPORT", 00:26:29.799 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:29.799 "adrfam": "ipv4", 00:26:29.799 "trsvcid": "$NVMF_PORT", 00:26:29.800 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:29.800 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:29.800 "hdgst": ${hdgst:-false}, 00:26:29.800 "ddgst": ${ddgst:-false} 00:26:29.800 }, 00:26:29.800 "method": "bdev_nvme_attach_controller" 00:26:29.800 } 00:26:29.800 EOF 00:26:29.800 )") 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:29.800 "params": { 00:26:29.800 "name": "Nvme0", 00:26:29.800 "trtype": "tcp", 00:26:29.800 "traddr": "10.0.0.2", 00:26:29.800 "adrfam": "ipv4", 00:26:29.800 "trsvcid": "4420", 00:26:29.800 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:29.800 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:29.800 "hdgst": false, 00:26:29.800 "ddgst": false 00:26:29.800 }, 00:26:29.800 "method": "bdev_nvme_attach_controller" 00:26:29.800 },{ 00:26:29.800 "params": { 00:26:29.800 "name": "Nvme1", 00:26:29.800 "trtype": "tcp", 00:26:29.800 "traddr": "10.0.0.2", 00:26:29.800 "adrfam": "ipv4", 00:26:29.800 "trsvcid": "4420", 00:26:29.800 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:29.800 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:29.800 "hdgst": false, 00:26:29.800 "ddgst": false 00:26:29.800 }, 00:26:29.800 "method": "bdev_nvme_attach_controller" 00:26:29.800 }' 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:29.800 12:27:21 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:29.800 12:27:21 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:29.800 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:29.800 ... 00:26:29.800 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:26:29.800 ... 00:26:29.800 fio-3.35 00:26:29.800 Starting 4 threads 00:26:29.800 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.058 00:26:35.058 filename0: (groupid=0, jobs=1): err= 0: pid=2994181: Fri Jul 26 12:27:27 2024 00:26:35.058 read: IOPS=1853, BW=14.5MiB/s (15.2MB/s)(72.5MiB/5005msec) 00:26:35.058 slat (nsec): min=5650, max=67875, avg=17485.26, stdev=10037.81 00:26:35.058 clat (usec): min=933, max=8250, avg=4257.15, stdev=582.31 00:26:35.058 lat (usec): min=946, max=8263, avg=4274.64, stdev=582.18 00:26:35.058 clat percentiles (usec): 00:26:35.058 | 1.00th=[ 2835], 5.00th=[ 3458], 10.00th=[ 3752], 20.00th=[ 3982], 00:26:35.058 | 30.00th=[ 4080], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:26:35.058 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4817], 95.00th=[ 5342], 00:26:35.058 | 99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 7635], 00:26:35.058 | 99.99th=[ 8225] 00:26:35.058 bw ( KiB/s): min=14192, max=15392, per=24.91%, avg=14833.60, stdev=368.15, samples=10 00:26:35.058 iops : min= 1774, max= 1924, avg=1854.20, stdev=46.02, samples=10 00:26:35.058 lat (usec) : 1000=0.01% 00:26:35.058 lat (msec) : 2=0.28%, 4=21.50%, 10=78.21% 00:26:35.058 cpu : usr=94.08%, sys=5.42%, ctx=10, majf=0, minf=35 00:26:35.058 IO depths : 1=0.2%, 2=9.4%, 4=63.2%, 8=27.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:35.058 submit : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.058 complete : 0=0.0%, 4=92.1%, 8=7.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.058 issued rwts: total=9279,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.058 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:35.058 filename0: (groupid=0, jobs=1): err= 0: pid=2994182: Fri Jul 26 12:27:27 2024 00:26:35.058 read: IOPS=1858, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5002msec) 00:26:35.058 slat (usec): min=5, max=136, avg=18.38, stdev=10.54 00:26:35.058 clat (usec): min=844, max=7559, avg=4241.86, stdev=568.21 00:26:35.058 lat (usec): min=864, max=7586, avg=4260.24, stdev=568.24 00:26:35.058 clat percentiles (usec): 00:26:35.058 | 1.00th=[ 2671], 5.00th=[ 3392], 10.00th=[ 3752], 20.00th=[ 3982], 00:26:35.058 | 30.00th=[ 4113], 40.00th=[ 4178], 50.00th=[ 4228], 60.00th=[ 4293], 00:26:35.058 | 70.00th=[ 4359], 80.00th=[ 4424], 90.00th=[ 4752], 95.00th=[ 5211], 00:26:35.058 | 99.00th=[ 6325], 99.50th=[ 6718], 99.90th=[ 7111], 99.95th=[ 7308], 00:26:35.058 | 99.99th=[ 7570] 00:26:35.058 bw ( KiB/s): min=14592, max=15200, per=24.97%, avg=14868.60, stdev=194.53, samples=10 00:26:35.058 iops : min= 1824, max= 1900, avg=1858.50, stdev=24.22, samples=10 00:26:35.058 lat (usec) : 1000=0.05% 00:26:35.058 lat (msec) : 2=0.28%, 4=19.96%, 10=79.71% 00:26:35.058 cpu : usr=94.90%, sys=4.36%, ctx=97, majf=0, minf=32 00:26:35.058 IO depths : 1=0.2%, 2=10.5%, 4=61.7%, 8=27.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:35.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.058 complete : 0=0.0%, 4=92.5%, 8=7.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.058 issued rwts: total=9298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.058 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:35.058 filename1: (groupid=0, jobs=1): err= 0: pid=2994183: Fri Jul 26 12:27:27 2024 00:26:35.058 read: IOPS=1874, BW=14.6MiB/s (15.4MB/s)(73.3MiB/5004msec) 00:26:35.058 slat (nsec): min=5405, max=70650, 
avg=16781.49, stdev=9931.93 00:26:35.058 clat (usec): min=1035, max=7677, avg=4210.75, stdev=569.74 00:26:35.058 lat (usec): min=1054, max=7733, avg=4227.54, stdev=569.90 00:26:35.058 clat percentiles (usec): 00:26:35.058 | 1.00th=[ 2802], 5.00th=[ 3425], 10.00th=[ 3654], 20.00th=[ 3949], 00:26:35.058 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:26:35.058 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4555], 95.00th=[ 5276], 00:26:35.058 | 99.00th=[ 6456], 99.50th=[ 6652], 99.90th=[ 7373], 99.95th=[ 7504], 00:26:35.058 | 99.99th=[ 7701] 00:26:35.058 bw ( KiB/s): min=14512, max=15424, per=25.19%, avg=15001.40, stdev=324.00, samples=10 00:26:35.058 iops : min= 1814, max= 1928, avg=1875.10, stdev=40.44, samples=10 00:26:35.058 lat (msec) : 2=0.17%, 4=23.72%, 10=76.11% 00:26:35.058 cpu : usr=94.00%, sys=5.52%, ctx=15, majf=0, minf=76 00:26:35.058 IO depths : 1=0.1%, 2=9.0%, 4=63.8%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:35.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.058 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.058 issued rwts: total=9382,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.058 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:35.058 filename1: (groupid=0, jobs=1): err= 0: pid=2994184: Fri Jul 26 12:27:27 2024 00:26:35.058 read: IOPS=1858, BW=14.5MiB/s (15.2MB/s)(72.6MiB/5003msec) 00:26:35.058 slat (nsec): min=5821, max=68068, avg=18195.90, stdev=10626.38 00:26:35.058 clat (usec): min=890, max=7819, avg=4241.72, stdev=558.11 00:26:35.058 lat (usec): min=898, max=7839, avg=4259.91, stdev=558.18 00:26:35.058 clat percentiles (usec): 00:26:35.058 | 1.00th=[ 2868], 5.00th=[ 3589], 10.00th=[ 3785], 20.00th=[ 3982], 00:26:35.058 | 30.00th=[ 4080], 40.00th=[ 4146], 50.00th=[ 4228], 60.00th=[ 4228], 00:26:35.058 | 70.00th=[ 4293], 80.00th=[ 4424], 90.00th=[ 4621], 95.00th=[ 5145], 00:26:35.058 | 99.00th=[ 6456], 99.50th=[ 6849], 99.90th=[ 
7373], 99.95th=[ 7635], 00:26:35.058 | 99.99th=[ 7832] 00:26:35.058 bw ( KiB/s): min=14480, max=15472, per=24.97%, avg=14868.80, stdev=315.98, samples=10 00:26:35.058 iops : min= 1810, max= 1934, avg=1858.60, stdev=39.50, samples=10 00:26:35.058 lat (usec) : 1000=0.03% 00:26:35.058 lat (msec) : 2=0.28%, 4=21.75%, 10=77.94% 00:26:35.058 cpu : usr=94.68%, sys=4.58%, ctx=112, majf=0, minf=34 00:26:35.058 IO depths : 1=0.5%, 2=10.3%, 4=62.7%, 8=26.4%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:35.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.058 complete : 0=0.0%, 4=91.7%, 8=8.3%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:35.058 issued rwts: total=9298,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:35.058 latency : target=0, window=0, percentile=100.00%, depth=8 00:26:35.058 00:26:35.058 Run status group 0 (all jobs): 00:26:35.058 READ: bw=58.2MiB/s (61.0MB/s), 14.5MiB/s-14.6MiB/s (15.2MB/s-15.4MB/s), io=291MiB (305MB), run=5002-5005msec 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:35.058 
12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.058 00:26:35.058 real 0m23.942s 00:26:35.058 user 4m29.612s 00:26:35.058 sys 0m8.016s 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:35.058 12:27:27 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:26:35.058 ************************************ 00:26:35.059 END TEST fio_dif_rand_params 00:26:35.059 ************************************ 00:26:35.059 12:27:27 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:26:35.059 12:27:27 nvmf_dif -- 
common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:35.059 12:27:27 nvmf_dif -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:35.059 12:27:27 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:35.059 ************************************ 00:26:35.059 START TEST fio_dif_digest 00:26:35.059 ************************************ 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1125 -- # fio_dif_digest 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 
00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:35.059 bdev_null0 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:35.059 [2024-07-26 12:27:27.813794] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest 
-- nvmf/common.sh@532 -- # config=() 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:35.059 { 00:26:35.059 "params": { 00:26:35.059 "name": "Nvme$subsystem", 00:26:35.059 "trtype": "$TEST_TRANSPORT", 00:26:35.059 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:35.059 "adrfam": "ipv4", 00:26:35.059 "trsvcid": "$NVMF_PORT", 00:26:35.059 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:35.059 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:35.059 "hdgst": ${hdgst:-false}, 00:26:35.059 "ddgst": ${ddgst:-false} 00:26:35.059 }, 00:26:35.059 "method": "bdev_nvme_attach_controller" 00:26:35.059 } 00:26:35.059 EOF 00:26:35.059 )") 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local 
plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:35.059 "params": { 00:26:35.059 "name": "Nvme0", 00:26:35.059 "trtype": "tcp", 00:26:35.059 "traddr": "10.0.0.2", 00:26:35.059 "adrfam": "ipv4", 00:26:35.059 "trsvcid": "4420", 00:26:35.059 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:26:35.059 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:26:35.059 "hdgst": true, 00:26:35.059 "ddgst": true 00:26:35.059 }, 00:26:35.059 "method": "bdev_nvme_attach_controller" 00:26:35.059 }' 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:26:35.059 12:27:27 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:26:35.059 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:26:35.059 ... 
00:26:35.059 fio-3.35 00:26:35.059 Starting 3 threads 00:26:35.059 EAL: No free 2048 kB hugepages reported on node 1 00:26:47.258 00:26:47.258 filename0: (groupid=0, jobs=1): err= 0: pid=2994975: Fri Jul 26 12:27:38 2024 00:26:47.258 read: IOPS=191, BW=24.0MiB/s (25.1MB/s)(241MiB/10047msec) 00:26:47.259 slat (nsec): min=6334, max=98223, avg=12978.96, stdev=3005.86 00:26:47.259 clat (usec): min=9301, max=56008, avg=15601.41, stdev=2814.52 00:26:47.259 lat (usec): min=9314, max=56021, avg=15614.39, stdev=2814.61 00:26:47.259 clat percentiles (usec): 00:26:47.259 | 1.00th=[10683], 5.00th=[13435], 10.00th=[13960], 20.00th=[14484], 00:26:47.259 | 30.00th=[14877], 40.00th=[15270], 50.00th=[15533], 60.00th=[15795], 00:26:47.259 | 70.00th=[16057], 80.00th=[16450], 90.00th=[16909], 95.00th=[17433], 00:26:47.259 | 99.00th=[18220], 99.50th=[19530], 99.90th=[55837], 99.95th=[55837], 00:26:47.259 | 99.99th=[55837] 00:26:47.259 bw ( KiB/s): min=22272, max=26164, per=33.17%, avg=24642.60, stdev=958.09, samples=20 00:26:47.259 iops : min= 174, max= 204, avg=192.50, stdev= 7.45, samples=20 00:26:47.259 lat (msec) : 10=0.36%, 20=99.22%, 50=0.05%, 100=0.36% 00:26:47.259 cpu : usr=91.84%, sys=7.71%, ctx=33, majf=0, minf=160 00:26:47.259 IO depths : 1=0.3%, 2=99.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.259 issued rwts: total=1927,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.259 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:47.259 filename0: (groupid=0, jobs=1): err= 0: pid=2994976: Fri Jul 26 12:27:38 2024 00:26:47.259 read: IOPS=191, BW=23.9MiB/s (25.1MB/s)(240MiB/10047msec) 00:26:47.259 slat (nsec): min=6004, max=39307, avg=13318.74, stdev=2662.68 00:26:47.259 clat (usec): min=8957, max=61208, avg=15642.84, stdev=2452.56 00:26:47.259 lat (usec): min=8970, max=61221, avg=15656.16, 
stdev=2452.66 00:26:47.259 clat percentiles (usec): 00:26:47.259 | 1.00th=[10814], 5.00th=[13435], 10.00th=[13960], 20.00th=[14615], 00:26:47.259 | 30.00th=[15008], 40.00th=[15270], 50.00th=[15664], 60.00th=[15926], 00:26:47.259 | 70.00th=[16188], 80.00th=[16581], 90.00th=[17171], 95.00th=[17695], 00:26:47.259 | 99.00th=[18482], 99.50th=[19268], 99.90th=[60031], 99.95th=[61080], 00:26:47.259 | 99.99th=[61080] 00:26:47.259 bw ( KiB/s): min=22016, max=25600, per=33.08%, avg=24576.00, stdev=838.84, samples=20 00:26:47.259 iops : min= 172, max= 200, avg=192.00, stdev= 6.55, samples=20 00:26:47.259 lat (msec) : 10=0.21%, 20=99.48%, 50=0.16%, 100=0.16% 00:26:47.259 cpu : usr=91.75%, sys=7.75%, ctx=43, majf=0, minf=121 00:26:47.259 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.259 issued rwts: total=1922,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.259 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:47.259 filename0: (groupid=0, jobs=1): err= 0: pid=2994977: Fri Jul 26 12:27:38 2024 00:26:47.259 read: IOPS=197, BW=24.7MiB/s (25.9MB/s)(248MiB/10048msec) 00:26:47.259 slat (nsec): min=6582, max=48761, avg=12935.54, stdev=2169.15 00:26:47.259 clat (usec): min=8755, max=56139, avg=15163.49, stdev=2781.28 00:26:47.259 lat (usec): min=8767, max=56152, avg=15176.43, stdev=2781.31 00:26:47.259 clat percentiles (usec): 00:26:47.259 | 1.00th=[11338], 5.00th=[13042], 10.00th=[13566], 20.00th=[14091], 00:26:47.259 | 30.00th=[14484], 40.00th=[14746], 50.00th=[15139], 60.00th=[15401], 00:26:47.259 | 70.00th=[15664], 80.00th=[15926], 90.00th=[16450], 95.00th=[16909], 00:26:47.259 | 99.00th=[17957], 99.50th=[19268], 99.90th=[55837], 99.95th=[56361], 00:26:47.259 | 99.99th=[56361] 00:26:47.259 bw ( KiB/s): min=23296, max=26624, per=34.13%, avg=25356.80, stdev=865.05, 
samples=20 00:26:47.259 iops : min= 182, max= 208, avg=198.10, stdev= 6.76, samples=20 00:26:47.259 lat (msec) : 10=0.15%, 20=99.45%, 50=0.05%, 100=0.35% 00:26:47.259 cpu : usr=91.95%, sys=7.60%, ctx=23, majf=0, minf=62 00:26:47.259 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:26:47.259 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.259 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:26:47.259 issued rwts: total=1983,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:26:47.259 latency : target=0, window=0, percentile=100.00%, depth=3 00:26:47.259 00:26:47.259 Run status group 0 (all jobs): 00:26:47.259 READ: bw=72.6MiB/s (76.1MB/s), 23.9MiB/s-24.7MiB/s (25.1MB/s-25.9MB/s), io=729MiB (764MB), run=10047-10048msec 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:47.259 00:26:47.259 real 0m11.101s 00:26:47.259 user 0m28.768s 00:26:47.259 sys 0m2.565s 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:47.259 12:27:38 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:26:47.259 ************************************ 00:26:47.259 END TEST fio_dif_digest 00:26:47.259 ************************************ 00:26:47.259 12:27:38 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:26:47.259 12:27:38 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:26:47.259 12:27:38 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:47.259 12:27:38 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:26:47.259 12:27:38 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:47.259 12:27:38 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:26:47.259 12:27:38 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:47.259 12:27:38 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:47.259 rmmod nvme_tcp 00:26:47.259 rmmod nvme_fabrics 00:26:47.259 rmmod nvme_keyring 00:26:47.259 12:27:38 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:47.259 12:27:38 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:26:47.259 12:27:38 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:26:47.259 12:27:38 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 2988270 ']' 00:26:47.259 12:27:38 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 2988270 00:26:47.259 12:27:38 nvmf_dif -- common/autotest_common.sh@950 -- # '[' -z 2988270 ']' 00:26:47.259 12:27:38 nvmf_dif -- common/autotest_common.sh@954 -- # kill -0 2988270 00:26:47.259 12:27:38 nvmf_dif -- common/autotest_common.sh@955 -- # uname 00:26:47.259 12:27:38 nvmf_dif -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:26:47.259 12:27:38 nvmf_dif -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2988270 00:26:47.259 12:27:39 nvmf_dif -- 
common/autotest_common.sh@956 -- # process_name=reactor_0 00:26:47.259 12:27:39 nvmf_dif -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:26:47.259 12:27:39 nvmf_dif -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2988270' 00:26:47.259 killing process with pid 2988270 00:26:47.259 12:27:39 nvmf_dif -- common/autotest_common.sh@969 -- # kill 2988270 00:26:47.259 12:27:39 nvmf_dif -- common/autotest_common.sh@974 -- # wait 2988270 00:26:47.259 12:27:39 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:26:47.259 12:27:39 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:26:47.259 Waiting for block devices as requested 00:26:47.259 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:26:47.259 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:47.517 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:47.517 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:47.517 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:47.776 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:47.776 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:47.776 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:47.776 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:47.776 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:26:48.034 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:26:48.034 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:26:48.034 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:26:48.292 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:26:48.292 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:26:48.292 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:26:48.292 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:26:48.550 12:27:41 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:48.550 12:27:41 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:48.550 12:27:41 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:48.550 
12:27:41 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:48.550 12:27:41 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:48.550 12:27:41 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:48.550 12:27:41 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.450 12:27:43 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:50.450 00:26:50.450 real 1m6.480s 00:26:50.450 user 6m25.499s 00:26:50.450 sys 0m20.063s 00:26:50.450 12:27:43 nvmf_dif -- common/autotest_common.sh@1126 -- # xtrace_disable 00:26:50.450 12:27:43 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:26:50.450 ************************************ 00:26:50.450 END TEST nvmf_dif 00:26:50.450 ************************************ 00:26:50.710 12:27:43 -- spdk/autotest.sh@297 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:50.710 12:27:43 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:50.710 12:27:43 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:50.710 12:27:43 -- common/autotest_common.sh@10 -- # set +x 00:26:50.710 ************************************ 00:26:50.710 START TEST nvmf_abort_qd_sizes 00:26:50.710 ************************************ 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:26:50.710 * Looking for test storage... 
00:26:50.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:50.710 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:50.711 12:27:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:26:50.711 12:27:43 
nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:50.711 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:50.711 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:50.711 12:27:43 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:26:50.711 12:27:43 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@304 
-- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.0 (0x8086 - 0x159b)' 00:26:52.644 Found 0000:0a:00.0 (0x8086 - 0x159b) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:0a:00.1 (0x8086 - 0x159b)' 00:26:52.644 Found 0000:0a:00.1 (0x8086 - 0x159b) 00:26:52.644 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.0: cvl_0_0' 
00:26:52.645 Found net devices under 0000:0a:00.0: cvl_0_0 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:0a:00.1: cvl_0_1' 00:26:52.645 Found net devices under 0000:0a:00.1: cvl_0_1 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # 
NVMF_TARGET_INTERFACE=cvl_0_0 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:52.645 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:52.645 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.231 ms 00:26:52.645 00:26:52.645 --- 10.0.0.2 ping statistics --- 00:26:52.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.645 rtt min/avg/max/mdev = 0.231/0.231/0.231/0.000 ms 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:52.645 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:52.645 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.197 ms 00:26:52.645 00:26:52.645 --- 10.0.0.1 ping statistics --- 00:26:52.645 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:52.645 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:26:52.645 12:27:45 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:53.581 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:53.581 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:53.581 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:53.581 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:53.581 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:53.581 0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:53.839 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:26:53.839 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:53.839 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:26:53.839 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:26:53.839 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:26:53.839 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:26:53.839 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:26:53.839 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:26:53.839 0000:80:04.1 (8086 0e21): 
ioatdma -> vfio-pci 00:26:53.839 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:26:54.773 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:26:54.773 12:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@724 -- # xtrace_disable 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=2999864 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 2999864 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@831 -- # '[' -z 2999864 ']' 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # local max_retries=100 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:54.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # xtrace_disable 00:26:54.774 12:27:47 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:54.774 [2024-07-26 12:27:47.995648] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:26:54.774 [2024-07-26 12:27:47.995718] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.032 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.032 [2024-07-26 12:27:48.059153] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:55.032 [2024-07-26 12:27:48.168268] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.032 [2024-07-26 12:27:48.168322] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.032 [2024-07-26 12:27:48.168360] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.032 [2024-07-26 12:27:48.168372] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.032 [2024-07-26 12:27:48.168382] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:55.032 [2024-07-26 12:27:48.168514] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.032 [2024-07-26 12:27:48.168579] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.032 [2024-07-26 12:27:48.168645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.032 [2024-07-26 12:27:48.168647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # return 0 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@730 -- # xtrace_disable 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:88:00.0 ]] 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:88:00.0 ]] 
00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:88:00.0 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:88:00.0 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:26:55.289 12:27:48 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:26:55.289 ************************************ 00:26:55.289 START TEST spdk_target_abort 00:26:55.289 ************************************ 00:26:55.289 12:27:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1125 -- # spdk_target 00:26:55.289 12:27:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:26:55.289 12:27:48 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:88:00.0 -b spdk_target 00:26:55.289 12:27:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:55.289 12:27:48 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:58.564 spdk_targetn1 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:58.565 [2024-07-26 12:27:51.204520] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:26:58.565 [2024-07-26 12:27:51.236785] tcp.c:1006:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:26:58.565 12:27:51 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:26:58.565 EAL: No free 2048 kB hugepages reported on node 1 00:27:01.875 Initializing NVMe Controllers 00:27:01.875 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:01.875 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:01.875 Initialization complete. Launching workers. 
00:27:01.875 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 10314, failed: 0 00:27:01.875 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1250, failed to submit 9064 00:27:01.875 success 782, unsuccess 468, failed 0 00:27:01.875 12:27:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:01.875 12:27:54 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:01.875 EAL: No free 2048 kB hugepages reported on node 1 00:27:05.149 Initializing NVMe Controllers 00:27:05.149 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:05.150 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:05.150 Initialization complete. Launching workers. 
00:27:05.150 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8575, failed: 0 00:27:05.150 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1251, failed to submit 7324 00:27:05.150 success 324, unsuccess 927, failed 0 00:27:05.150 12:27:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:05.150 12:27:57 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:05.150 EAL: No free 2048 kB hugepages reported on node 1 00:27:07.675 Initializing NVMe Controllers 00:27:07.675 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:27:07.675 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:07.675 Initialization complete. Launching workers. 
00:27:07.675 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 31302, failed: 0 00:27:07.675 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2722, failed to submit 28580 00:27:07.675 success 552, unsuccess 2170, failed 0 00:27:07.675 12:28:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:27:07.675 12:28:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.675 12:28:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:07.675 12:28:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:07.675 12:28:00 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:27:07.675 12:28:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:07.675 12:28:00 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:09.048 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:09.048 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 2999864 00:27:09.048 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@950 -- # '[' -z 2999864 ']' 00:27:09.048 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # kill -0 2999864 00:27:09.048 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # uname 00:27:09.048 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:09.048 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 2999864 00:27:09.048 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort 
-- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:09.048 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:09.048 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@968 -- # echo 'killing process with pid 2999864' 00:27:09.048 killing process with pid 2999864 00:27:09.048 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@969 -- # kill 2999864 00:27:09.048 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@974 -- # wait 2999864 00:27:09.307 00:27:09.307 real 0m14.161s 00:27:09.307 user 0m53.692s 00:27:09.307 sys 0m2.513s 00:27:09.307 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:09.307 12:28:02 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:09.307 ************************************ 00:27:09.307 END TEST spdk_target_abort 00:27:09.307 ************************************ 00:27:09.307 12:28:02 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:27:09.307 12:28:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:09.307 12:28:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:09.307 12:28:02 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:09.565 ************************************ 00:27:09.565 START TEST kernel_target_abort 00:27:09.565 ************************************ 00:27:09.565 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1125 -- # kernel_target 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- 
# ip_candidates=() 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:27:09.566 12:28:02 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:27:09.566 12:28:02 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:10.500 Waiting for block devices as requested 00:27:10.500 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:10.760 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:10.760 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:11.018 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:11.018 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:11.018 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:11.018 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:11.277 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:11.277 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:11.277 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:11.277 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:11.535 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:11.535 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:11.535 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:11.535 0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:11.793 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:11.793 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:11.793 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:27:11.793 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:27:11.793 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:27:11.793 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local 
device=nvme0n1 00:27:11.793 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:27:11.793 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:27:11.793 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:27:11.793 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:27:11.793 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:27:12.050 No valid GPT data, bailing 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort 
-- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 --hostid=5b23e107-7094-e311-b1cb-001e67a97d55 -a 10.0.0.1 -t tcp -s 4420 00:27:12.050 00:27:12.050 Discovery Log Number of Records 2, Generation counter 2 00:27:12.050 =====Discovery Log Entry 0====== 00:27:12.050 trtype: tcp 00:27:12.050 adrfam: ipv4 00:27:12.050 subtype: current discovery subsystem 00:27:12.050 treq: not specified, sq flow control disable supported 00:27:12.050 portid: 1 00:27:12.050 trsvcid: 4420 00:27:12.050 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:27:12.050 traddr: 10.0.0.1 00:27:12.050 eflags: none 00:27:12.050 sectype: none 00:27:12.050 =====Discovery Log Entry 1====== 00:27:12.050 trtype: tcp 00:27:12.050 adrfam: ipv4 00:27:12.050 subtype: nvme subsystem 00:27:12.050 treq: not specified, sq flow control disable supported 00:27:12.050 portid: 1 00:27:12.050 trsvcid: 4420 00:27:12.050 subnqn: nqn.2016-06.io.spdk:testnqn 00:27:12.050 traddr: 10.0.0.1 00:27:12.050 eflags: none 00:27:12.050 sectype: none 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 
nqn.2016-06.io.spdk:testnqn 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:12.050 12:28:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:12.051 EAL: No free 2048 kB hugepages reported on node 1 00:27:15.331 Initializing NVMe Controllers 00:27:15.331 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:15.331 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:15.331 Initialization complete. Launching workers. 
00:27:15.331 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 32112, failed: 0 00:27:15.331 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 32112, failed to submit 0 00:27:15.331 success 0, unsuccess 32112, failed 0 00:27:15.331 12:28:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:15.331 12:28:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:15.331 EAL: No free 2048 kB hugepages reported on node 1 00:27:18.616 Initializing NVMe Controllers 00:27:18.616 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:18.616 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:18.616 Initialization complete. Launching workers. 
00:27:18.616 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 66963, failed: 0 00:27:18.616 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 16906, failed to submit 50057 00:27:18.616 success 0, unsuccess 16906, failed 0 00:27:18.616 12:28:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:27:18.616 12:28:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:27:18.616 EAL: No free 2048 kB hugepages reported on node 1 00:27:21.901 Initializing NVMe Controllers 00:27:21.901 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:27:21.901 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:27:21.901 Initialization complete. Launching workers. 
00:27:21.901 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 62063, failed: 0 00:27:21.901 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 15490, failed to submit 46573 00:27:21.901 success 0, unsuccess 15490, failed 0 00:27:21.901 12:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:27:21.901 12:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:27:21.901 12:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:27:21.901 12:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:21.901 12:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:27:21.901 12:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:27:21.901 12:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:27:21.901 12:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:27:21.901 12:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:27:21.901 12:28:14 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:27:22.467 0000:00:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:22.467 0000:00:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:22.467 0000:00:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:22.467 0000:00:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:22.726 0000:00:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:22.726 
0000:00:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:22.726 0000:00:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:22.726 0000:00:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:22.726 0000:80:04.7 (8086 0e27): ioatdma -> vfio-pci 00:27:22.726 0000:80:04.6 (8086 0e26): ioatdma -> vfio-pci 00:27:22.726 0000:80:04.5 (8086 0e25): ioatdma -> vfio-pci 00:27:22.726 0000:80:04.4 (8086 0e24): ioatdma -> vfio-pci 00:27:22.726 0000:80:04.3 (8086 0e23): ioatdma -> vfio-pci 00:27:22.726 0000:80:04.2 (8086 0e22): ioatdma -> vfio-pci 00:27:22.726 0000:80:04.1 (8086 0e21): ioatdma -> vfio-pci 00:27:22.726 0000:80:04.0 (8086 0e20): ioatdma -> vfio-pci 00:27:23.664 0000:88:00.0 (8086 0a54): nvme -> vfio-pci 00:27:23.664 00:27:23.664 real 0m14.272s 00:27:23.664 user 0m5.224s 00:27:23.664 sys 0m3.359s 00:27:23.664 12:28:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:23.664 12:28:16 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:27:23.664 ************************************ 00:27:23.664 END TEST kernel_target_abort 00:27:23.664 ************************************ 00:27:23.664 12:28:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:27:23.664 12:28:16 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:27:23.664 12:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:23.664 12:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:27:23.664 12:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:23.664 12:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:27:23.664 12:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:23.665 12:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:23.665 rmmod nvme_tcp 00:27:23.665 rmmod nvme_fabrics 00:27:23.665 rmmod nvme_keyring 00:27:23.665 12:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@123 
-- # modprobe -v -r nvme-fabrics 00:27:23.922 12:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:27:23.922 12:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:27:23.922 12:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 2999864 ']' 00:27:23.922 12:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 2999864 00:27:23.922 12:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@950 -- # '[' -z 2999864 ']' 00:27:23.922 12:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # kill -0 2999864 00:27:23.922 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 954: kill: (2999864) - No such process 00:27:23.922 12:28:16 nvmf_abort_qd_sizes -- common/autotest_common.sh@977 -- # echo 'Process with pid 2999864 is not found' 00:27:23.922 Process with pid 2999864 is not found 00:27:23.922 12:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:27:23.922 12:28:16 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:27:24.858 Waiting for block devices as requested 00:27:25.116 0000:88:00.0 (8086 0a54): vfio-pci -> nvme 00:27:25.116 0000:00:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:25.116 0000:00:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:25.374 0000:00:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:25.374 0000:00:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:25.374 0000:00:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:25.631 0000:00:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:25.631 0000:00:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:25.631 0000:00:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:25.631 0000:80:04.7 (8086 0e27): vfio-pci -> ioatdma 00:27:25.631 0000:80:04.6 (8086 0e26): vfio-pci -> ioatdma 00:27:25.888 0000:80:04.5 (8086 0e25): vfio-pci -> ioatdma 00:27:25.888 0000:80:04.4 (8086 0e24): vfio-pci -> ioatdma 00:27:25.888 0000:80:04.3 (8086 0e23): vfio-pci -> ioatdma 00:27:25.888 
0000:80:04.2 (8086 0e22): vfio-pci -> ioatdma 00:27:26.147 0000:80:04.1 (8086 0e21): vfio-pci -> ioatdma 00:27:26.147 0000:80:04.0 (8086 0e20): vfio-pci -> ioatdma 00:27:26.147 12:28:19 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:26.147 12:28:19 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:26.147 12:28:19 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:26.147 12:28:19 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:26.147 12:28:19 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:26.147 12:28:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:26.147 12:28:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:28.687 12:28:21 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:28.687 00:27:28.687 real 0m37.659s 00:27:28.687 user 1m1.006s 00:27:28.687 sys 0m9.081s 00:27:28.687 12:28:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:28.687 12:28:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:27:28.687 ************************************ 00:27:28.687 END TEST nvmf_abort_qd_sizes 00:27:28.687 ************************************ 00:27:28.687 12:28:21 -- spdk/autotest.sh@299 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:28.687 12:28:21 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:28.687 12:28:21 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:28.687 12:28:21 -- common/autotest_common.sh@10 -- # set +x 00:27:28.687 ************************************ 00:27:28.687 START TEST keyring_file 00:27:28.687 ************************************ 00:27:28.687 12:28:21 keyring_file -- common/autotest_common.sh@1125 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:27:28.687 * Looking for test storage... 00:27:28.687 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:28.687 12:28:21 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:28.687 12:28:21 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@22 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:28.687 12:28:21 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:28.687 12:28:21 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:28.687 12:28:21 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:28.687 12:28:21 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.687 12:28:21 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.687 12:28:21 keyring_file -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.687 12:28:21 keyring_file -- paths/export.sh@5 -- # export PATH 00:27:28.687 12:28:21 keyring_file -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@47 -- # : 0 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:28.687 12:28:21 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:28.687 12:28:21 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:28.687 12:28:21 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:28.687 12:28:21 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:27:28.687 12:28:21 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:27:28.687 12:28:21 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:27:28.687 12:28:21 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:28.687 12:28:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:28.687 12:28:21 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:28.687 12:28:21 
keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:28.687 12:28:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:28.687 12:28:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:28.687 12:28:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.C1i429Qoe1 00:27:28.687 12:28:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:28.687 12:28:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:28.688 12:28:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:28.688 12:28:21 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:28.688 12:28:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:28.688 12:28:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:28.688 12:28:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.C1i429Qoe1 00:27:28.688 12:28:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.C1i429Qoe1 00:27:28.688 12:28:21 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.C1i429Qoe1 00:27:28.688 12:28:21 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:27:28.688 12:28:21 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:28.688 12:28:21 keyring_file -- keyring/common.sh@17 -- # name=key1 00:27:28.688 12:28:21 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:28.688 12:28:21 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:28.688 12:28:21 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:28.688 12:28:21 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.MtwuhNxco0 00:27:28.688 12:28:21 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:28.688 12:28:21 
keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:27:28.688 12:28:21 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:28.688 12:28:21 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:28.688 12:28:21 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:28.688 12:28:21 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:28.688 12:28:21 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:28.688 12:28:21 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.MtwuhNxco0 00:27:28.688 12:28:21 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.MtwuhNxco0 00:27:28.688 12:28:21 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.MtwuhNxco0 00:27:28.688 12:28:21 keyring_file -- keyring/file.sh@30 -- # tgtpid=3005622 00:27:28.688 12:28:21 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:28.688 12:28:21 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3005622 00:27:28.688 12:28:21 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3005622 ']' 00:27:28.688 12:28:21 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:28.688 12:28:21 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:28.688 12:28:21 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:28.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:28.688 12:28:21 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:28.688 12:28:21 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:28.688 [2024-07-26 12:28:21.663847] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:27:28.688 [2024-07-26 12:28:21.663935] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005622 ] 00:27:28.688 EAL: No free 2048 kB hugepages reported on node 1 00:27:28.688 [2024-07-26 12:28:21.721609] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:28.688 [2024-07-26 12:28:21.833807] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:28.946 12:28:22 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:28.946 12:28:22 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:27:28.946 12:28:22 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:27:28.946 12:28:22 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.946 12:28:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:28.946 [2024-07-26 12:28:22.103759] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:28.946 null0 00:27:28.946 [2024-07-26 12:28:22.135840] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:28.946 [2024-07-26 12:28:22.136343] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:28.946 [2024-07-26 12:28:22.143840] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:27:28.946 12:28:22 keyring_file -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:28.946 12:28:22 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:28.946 12:28:22 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:28.946 12:28:22 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 
4420 nqn.2016-06.io.spdk:cnode0 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@638 -- # local arg=rpc_cmd 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@642 -- # type -t rpc_cmd 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@653 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:28.947 [2024-07-26 12:28:22.155875] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:27:28.947 request: 00:27:28.947 { 00:27:28.947 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:27:28.947 "secure_channel": false, 00:27:28.947 "listen_address": { 00:27:28.947 "trtype": "tcp", 00:27:28.947 "traddr": "127.0.0.1", 00:27:28.947 "trsvcid": "4420" 00:27:28.947 }, 00:27:28.947 "method": "nvmf_subsystem_add_listener", 00:27:28.947 "req_id": 1 00:27:28.947 } 00:27:28.947 Got JSON-RPC error response 00:27:28.947 response: 00:27:28.947 { 00:27:28.947 "code": -32602, 00:27:28.947 "message": "Invalid parameters" 00:27:28.947 } 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@589 -- # [[ 1 == 0 ]] 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:28.947 12:28:22 keyring_file -- keyring/file.sh@46 -- # bperfpid=3005639 00:27:28.947 12:28:22 keyring_file -- keyring/file.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:27:28.947 12:28:22 keyring_file -- keyring/file.sh@48 -- # waitforlisten 3005639 /var/tmp/bperf.sock 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3005639 ']' 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:28.947 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:28.947 12:28:22 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:29.206 [2024-07-26 12:28:22.207390] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:27:29.206 [2024-07-26 12:28:22.207473] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3005639 ] 00:27:29.206 EAL: No free 2048 kB hugepages reported on node 1 00:27:29.206 [2024-07-26 12:28:22.272362] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:29.206 [2024-07-26 12:28:22.388662] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:29.463 12:28:22 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:29.463 12:28:22 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:27:29.463 12:28:22 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.C1i429Qoe1 00:27:29.463 12:28:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.C1i429Qoe1 00:27:29.721 12:28:22 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MtwuhNxco0 00:27:29.721 12:28:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MtwuhNxco0 00:27:29.979 12:28:22 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:27:29.979 12:28:22 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:27:29.979 12:28:22 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:29.979 12:28:22 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:29.979 12:28:22 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:30.236 12:28:23 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.C1i429Qoe1 == 
\/\t\m\p\/\t\m\p\.\C\1\i\4\2\9\Q\o\e\1 ]] 00:27:30.236 12:28:23 keyring_file -- keyring/file.sh@52 -- # get_key key1 00:27:30.236 12:28:23 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:27:30.236 12:28:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.236 12:28:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.236 12:28:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:30.236 12:28:23 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.MtwuhNxco0 == \/\t\m\p\/\t\m\p\.\M\t\w\u\h\N\x\c\o\0 ]] 00:27:30.236 12:28:23 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:27:30.236 12:28:23 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:30.236 12:28:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:30.236 12:28:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.236 12:28:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.236 12:28:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:30.494 12:28:23 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:27:30.494 12:28:23 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:27:30.494 12:28:23 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:30.494 12:28:23 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:30.494 12:28:23 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:30.494 12:28:23 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:30.494 12:28:23 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:30.752 12:28:24 keyring_file -- keyring/file.sh@54 -- # 
(( 1 == 1 )) 00:27:30.752 12:28:24 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:30.752 12:28:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:31.010 [2024-07-26 12:28:24.234762] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:31.267 nvme0n1 00:27:31.267 12:28:24 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:27:31.267 12:28:24 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:31.267 12:28:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:31.267 12:28:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:31.267 12:28:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:31.267 12:28:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:31.525 12:28:24 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:27:31.525 12:28:24 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:27:31.525 12:28:24 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:31.525 12:28:24 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:31.525 12:28:24 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:31.525 12:28:24 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:31.525 12:28:24 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:31.783 12:28:24 keyring_file -- 
keyring/file.sh@60 -- # (( 1 == 1 )) 00:27:31.783 12:28:24 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:31.783 Running I/O for 1 seconds... 00:27:32.718 00:27:32.718 Latency(us) 00:27:32.718 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:32.718 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:27:32.718 nvme0n1 : 1.02 4493.64 17.55 0.00 0.00 28150.39 7281.78 31457.28 00:27:32.718 =================================================================================================================== 00:27:32.718 Total : 4493.64 17.55 0.00 0.00 28150.39 7281.78 31457.28 00:27:32.718 0 00:27:32.718 12:28:25 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:32.718 12:28:25 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:32.976 12:28:26 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:27:32.976 12:28:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:33.234 12:28:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:33.234 12:28:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.234 12:28:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.234 12:28:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:33.234 12:28:26 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:27:33.234 12:28:26 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:27:33.234 12:28:26 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:33.234 12:28:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:33.234 12:28:26 
keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.234 12:28:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.234 12:28:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:33.492 12:28:26 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:27:33.492 12:28:26 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:33.492 12:28:26 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:33.492 12:28:26 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:33.492 12:28:26 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:33.492 12:28:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:33.492 12:28:26 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:33.492 12:28:26 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:33.492 12:28:26 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:33.492 12:28:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:27:33.750 [2024-07-26 12:28:26.946797] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 
428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:33.750 [2024-07-26 12:28:26.947218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd849a0 (107): Transport endpoint is not connected 00:27:33.750 [2024-07-26 12:28:26.948210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xd849a0 (9): Bad file descriptor 00:27:33.750 [2024-07-26 12:28:26.949208] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:33.750 [2024-07-26 12:28:26.949228] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:33.750 [2024-07-26 12:28:26.949241] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:33.750 request: 00:27:33.750 { 00:27:33.750 "name": "nvme0", 00:27:33.750 "trtype": "tcp", 00:27:33.750 "traddr": "127.0.0.1", 00:27:33.750 "adrfam": "ipv4", 00:27:33.750 "trsvcid": "4420", 00:27:33.750 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:33.750 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:33.750 "prchk_reftag": false, 00:27:33.750 "prchk_guard": false, 00:27:33.750 "hdgst": false, 00:27:33.750 "ddgst": false, 00:27:33.750 "psk": "key1", 00:27:33.750 "method": "bdev_nvme_attach_controller", 00:27:33.750 "req_id": 1 00:27:33.750 } 00:27:33.750 Got JSON-RPC error response 00:27:33.750 response: 00:27:33.750 { 00:27:33.750 "code": -5, 00:27:33.750 "message": "Input/output error" 00:27:33.750 } 00:27:33.750 12:28:26 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:33.750 12:28:26 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:33.750 12:28:26 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:33.750 12:28:26 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:33.750 12:28:26 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:27:33.750 
12:28:26 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:33.750 12:28:26 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:33.750 12:28:26 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:33.750 12:28:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:33.750 12:28:26 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:34.007 12:28:27 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:27:34.007 12:28:27 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:27:34.007 12:28:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:34.007 12:28:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:34.007 12:28:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:34.007 12:28:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:34.007 12:28:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:34.264 12:28:27 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:27:34.264 12:28:27 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:27:34.264 12:28:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:34.522 12:28:27 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:27:34.522 12:28:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:27:34.780 12:28:27 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:27:34.780 12:28:27 keyring_file -- keyring/file.sh@77 -- # jq length 00:27:34.780 12:28:27 keyring_file 
-- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.037 12:28:28 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:27:35.037 12:28:28 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.C1i429Qoe1 00:27:35.037 12:28:28 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.C1i429Qoe1 00:27:35.037 12:28:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:35.037 12:28:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.C1i429Qoe1 00:27:35.037 12:28:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:35.037 12:28:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:35.037 12:28:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:35.037 12:28:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:35.037 12:28:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.C1i429Qoe1 00:27:35.037 12:28:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.C1i429Qoe1 00:27:35.295 [2024-07-26 12:28:28.447089] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.C1i429Qoe1': 0100660 00:27:35.295 [2024-07-26 12:28:28.447140] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:27:35.295 request: 00:27:35.295 { 00:27:35.295 "name": "key0", 00:27:35.295 "path": "/tmp/tmp.C1i429Qoe1", 00:27:35.295 "method": "keyring_file_add_key", 00:27:35.295 "req_id": 1 00:27:35.295 } 00:27:35.295 Got JSON-RPC error response 00:27:35.295 response: 00:27:35.295 { 00:27:35.295 "code": -1, 00:27:35.295 "message": "Operation not permitted" 
00:27:35.295 } 00:27:35.295 12:28:28 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:35.295 12:28:28 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:35.295 12:28:28 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:35.295 12:28:28 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:35.295 12:28:28 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.C1i429Qoe1 00:27:35.295 12:28:28 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.C1i429Qoe1 00:27:35.295 12:28:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.C1i429Qoe1 00:27:35.553 12:28:28 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.C1i429Qoe1 00:27:35.553 12:28:28 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:27:35.553 12:28:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:35.553 12:28:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:35.553 12:28:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:35.553 12:28:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:35.553 12:28:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:35.812 12:28:28 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:27:35.812 12:28:28 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:35.812 12:28:28 keyring_file -- common/autotest_common.sh@650 -- # local es=0 00:27:35.812 12:28:28 keyring_file -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n 
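The test above first `chmod 0660`s the key file and expects `keyring_file_add_key` to fail with "Invalid permissions for key file ... 0100660", then `chmod 0600`s it and expects success. A minimal sketch of that policy check (a hypothetical re-implementation for illustration, not SPDK's actual `keyring_file_check_path` code) is:

```python
import os
import stat


def check_key_file(path: str) -> None:
    """Reject key files that are readable or writable by group or other,
    mirroring the 0600-only policy the log's keyring test exercises."""
    mode = stat.S_IMODE(os.stat(path).st_mode)
    if mode & (stat.S_IRWXG | stat.S_IRWXO):
        # SPDK logs this case as: Invalid permissions for key file '<path>'
        raise PermissionError(
            f"Invalid permissions for key file '{path}': {oct(mode)}")
```

Under this check, a 0660 file raises `PermissionError` (matching the `-1` / "Operation not permitted" JSON-RPC response above), while a 0600 file passes.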
nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:35.812 12:28:28 keyring_file -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:35.812 12:28:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:35.812 12:28:28 keyring_file -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:35.813 12:28:28 keyring_file -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:35.813 12:28:28 keyring_file -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:35.813 12:28:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.071 [2024-07-26 12:28:29.201197] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.C1i429Qoe1': No such file or directory 00:27:36.071 [2024-07-26 12:28:29.201238] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:27:36.071 [2024-07-26 12:28:29.201266] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:27:36.071 [2024-07-26 12:28:29.201278] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:36.071 [2024-07-26 12:28:29.201289] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:27:36.071 request: 00:27:36.071 { 00:27:36.071 "name": "nvme0", 00:27:36.071 "trtype": "tcp", 00:27:36.071 "traddr": "127.0.0.1", 00:27:36.071 "adrfam": "ipv4", 00:27:36.071 "trsvcid": "4420", 00:27:36.071 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:36.071 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:36.071 
"prchk_reftag": false, 00:27:36.071 "prchk_guard": false, 00:27:36.071 "hdgst": false, 00:27:36.071 "ddgst": false, 00:27:36.071 "psk": "key0", 00:27:36.071 "method": "bdev_nvme_attach_controller", 00:27:36.071 "req_id": 1 00:27:36.071 } 00:27:36.071 Got JSON-RPC error response 00:27:36.071 response: 00:27:36.071 { 00:27:36.071 "code": -19, 00:27:36.071 "message": "No such device" 00:27:36.071 } 00:27:36.071 12:28:29 keyring_file -- common/autotest_common.sh@653 -- # es=1 00:27:36.071 12:28:29 keyring_file -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:36.071 12:28:29 keyring_file -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:36.071 12:28:29 keyring_file -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:36.071 12:28:29 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:27:36.071 12:28:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:36.329 12:28:29 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:27:36.329 12:28:29 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:27:36.329 12:28:29 keyring_file -- keyring/common.sh@17 -- # name=key0 00:27:36.329 12:28:29 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:36.329 12:28:29 keyring_file -- keyring/common.sh@17 -- # digest=0 00:27:36.329 12:28:29 keyring_file -- keyring/common.sh@18 -- # mktemp 00:27:36.329 12:28:29 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.LvRRVb7ET1 00:27:36.329 12:28:29 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:36.329 12:28:29 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:36.329 12:28:29 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:27:36.329 12:28:29 keyring_file -- 
nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:36.329 12:28:29 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:36.329 12:28:29 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:27:36.329 12:28:29 keyring_file -- nvmf/common.sh@705 -- # python - 00:27:36.329 12:28:29 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.LvRRVb7ET1 00:27:36.329 12:28:29 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.LvRRVb7ET1 00:27:36.329 12:28:29 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.LvRRVb7ET1 00:27:36.329 12:28:29 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LvRRVb7ET1 00:27:36.329 12:28:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LvRRVb7ET1 00:27:36.586 12:28:29 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.586 12:28:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:36.844 nvme0n1 00:27:36.844 12:28:30 keyring_file -- keyring/file.sh@99 -- # get_refcnt key0 00:27:36.844 12:28:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:36.844 12:28:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:36.844 12:28:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:36.844 12:28:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:36.844 12:28:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 
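The `prep_key` step above feeds `prefix=NVMeTLSkey-1`, the hex key `00112233445566778899aabbccddeeff`, and `digest=0` into a small inline Python helper (`nvmf/common.sh`'s `format_key`). A sketch of what that interchange-format construction appears to do, assuming the payload is the raw key bytes followed by their little-endian CRC32, base64-encoded inside the `NVMeTLSkey-1:<digest>:...:` frame:

```python
import base64
import struct
import zlib


def format_interchange_psk(hex_key: str, digest: int = 0) -> str:
    """Build an NVMe/TCP TLS PSK interchange string from a hex key.

    Payload = base64(key_bytes + CRC32(key_bytes) as little-endian u32),
    framed as 'NVMeTLSkey-1:<digest as 2 hex digits>:<payload>:'.
    """
    key = bytes.fromhex(hex_key)
    crc = struct.pack("<I", zlib.crc32(key))
    payload = base64.b64encode(key + crc).decode()
    return f"NVMeTLSkey-1:{digest:02x}:{payload}:"
```

For the test's 16-byte key with `digest=0`, this yields a string starting with `NVMeTLSkey-1:00:`, which is then written to the mktemp path (`/tmp/tmp.LvRRVb7ET1` in this run) and registered as `key0`.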
00:27:37.102 12:28:30 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:27:37.102 12:28:30 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:27:37.102 12:28:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:27:37.359 12:28:30 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:27:37.359 12:28:30 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:27:37.359 12:28:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:37.359 12:28:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:37.360 12:28:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.617 12:28:30 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:27:37.617 12:28:30 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:27:37.617 12:28:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:37.617 12:28:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:37.617 12:28:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:37.617 12:28:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:37.617 12:28:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:37.874 12:28:31 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:27:37.874 12:28:31 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:37.874 12:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:38.132 12:28:31 keyring_file -- keyring/file.sh@104 -- # bperf_cmd 
keyring_get_keys 00:27:38.132 12:28:31 keyring_file -- keyring/file.sh@104 -- # jq length 00:27:38.132 12:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:38.389 12:28:31 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:27:38.389 12:28:31 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.LvRRVb7ET1 00:27:38.389 12:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.LvRRVb7ET1 00:27:38.647 12:28:31 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.MtwuhNxco0 00:27:38.647 12:28:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.MtwuhNxco0 00:27:38.905 12:28:32 keyring_file -- keyring/file.sh@109 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:38.905 12:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:27:39.163 nvme0n1 00:27:39.163 12:28:32 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:27:39.163 12:28:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:27:39.731 12:28:32 keyring_file -- keyring/file.sh@112 -- # config='{ 00:27:39.731 "subsystems": [ 00:27:39.731 { 00:27:39.731 "subsystem": "keyring", 00:27:39.731 "config": [ 00:27:39.731 { 00:27:39.731 "method": "keyring_file_add_key", 00:27:39.731 
"params": { 00:27:39.731 "name": "key0", 00:27:39.731 "path": "/tmp/tmp.LvRRVb7ET1" 00:27:39.732 } 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "method": "keyring_file_add_key", 00:27:39.732 "params": { 00:27:39.732 "name": "key1", 00:27:39.732 "path": "/tmp/tmp.MtwuhNxco0" 00:27:39.732 } 00:27:39.732 } 00:27:39.732 ] 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "subsystem": "iobuf", 00:27:39.732 "config": [ 00:27:39.732 { 00:27:39.732 "method": "iobuf_set_options", 00:27:39.732 "params": { 00:27:39.732 "small_pool_count": 8192, 00:27:39.732 "large_pool_count": 1024, 00:27:39.732 "small_bufsize": 8192, 00:27:39.732 "large_bufsize": 135168 00:27:39.732 } 00:27:39.732 } 00:27:39.732 ] 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "subsystem": "sock", 00:27:39.732 "config": [ 00:27:39.732 { 00:27:39.732 "method": "sock_set_default_impl", 00:27:39.732 "params": { 00:27:39.732 "impl_name": "posix" 00:27:39.732 } 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "method": "sock_impl_set_options", 00:27:39.732 "params": { 00:27:39.732 "impl_name": "ssl", 00:27:39.732 "recv_buf_size": 4096, 00:27:39.732 "send_buf_size": 4096, 00:27:39.732 "enable_recv_pipe": true, 00:27:39.732 "enable_quickack": false, 00:27:39.732 "enable_placement_id": 0, 00:27:39.732 "enable_zerocopy_send_server": true, 00:27:39.732 "enable_zerocopy_send_client": false, 00:27:39.732 "zerocopy_threshold": 0, 00:27:39.732 "tls_version": 0, 00:27:39.732 "enable_ktls": false 00:27:39.732 } 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "method": "sock_impl_set_options", 00:27:39.732 "params": { 00:27:39.732 "impl_name": "posix", 00:27:39.732 "recv_buf_size": 2097152, 00:27:39.732 "send_buf_size": 2097152, 00:27:39.732 "enable_recv_pipe": true, 00:27:39.732 "enable_quickack": false, 00:27:39.732 "enable_placement_id": 0, 00:27:39.732 "enable_zerocopy_send_server": true, 00:27:39.732 "enable_zerocopy_send_client": false, 00:27:39.732 "zerocopy_threshold": 0, 00:27:39.732 "tls_version": 0, 00:27:39.732 "enable_ktls": false 
00:27:39.732 } 00:27:39.732 } 00:27:39.732 ] 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "subsystem": "vmd", 00:27:39.732 "config": [] 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "subsystem": "accel", 00:27:39.732 "config": [ 00:27:39.732 { 00:27:39.732 "method": "accel_set_options", 00:27:39.732 "params": { 00:27:39.732 "small_cache_size": 128, 00:27:39.732 "large_cache_size": 16, 00:27:39.732 "task_count": 2048, 00:27:39.732 "sequence_count": 2048, 00:27:39.732 "buf_count": 2048 00:27:39.732 } 00:27:39.732 } 00:27:39.732 ] 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "subsystem": "bdev", 00:27:39.732 "config": [ 00:27:39.732 { 00:27:39.732 "method": "bdev_set_options", 00:27:39.732 "params": { 00:27:39.732 "bdev_io_pool_size": 65535, 00:27:39.732 "bdev_io_cache_size": 256, 00:27:39.732 "bdev_auto_examine": true, 00:27:39.732 "iobuf_small_cache_size": 128, 00:27:39.732 "iobuf_large_cache_size": 16 00:27:39.732 } 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "method": "bdev_raid_set_options", 00:27:39.732 "params": { 00:27:39.732 "process_window_size_kb": 1024, 00:27:39.732 "process_max_bandwidth_mb_sec": 0 00:27:39.732 } 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "method": "bdev_iscsi_set_options", 00:27:39.732 "params": { 00:27:39.732 "timeout_sec": 30 00:27:39.732 } 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "method": "bdev_nvme_set_options", 00:27:39.732 "params": { 00:27:39.732 "action_on_timeout": "none", 00:27:39.732 "timeout_us": 0, 00:27:39.732 "timeout_admin_us": 0, 00:27:39.732 "keep_alive_timeout_ms": 10000, 00:27:39.732 "arbitration_burst": 0, 00:27:39.732 "low_priority_weight": 0, 00:27:39.732 "medium_priority_weight": 0, 00:27:39.732 "high_priority_weight": 0, 00:27:39.732 "nvme_adminq_poll_period_us": 10000, 00:27:39.732 "nvme_ioq_poll_period_us": 0, 00:27:39.732 "io_queue_requests": 512, 00:27:39.732 "delay_cmd_submit": true, 00:27:39.732 "transport_retry_count": 4, 00:27:39.732 "bdev_retry_count": 3, 00:27:39.732 "transport_ack_timeout": 0, 
00:27:39.732 "ctrlr_loss_timeout_sec": 0, 00:27:39.732 "reconnect_delay_sec": 0, 00:27:39.732 "fast_io_fail_timeout_sec": 0, 00:27:39.732 "disable_auto_failback": false, 00:27:39.732 "generate_uuids": false, 00:27:39.732 "transport_tos": 0, 00:27:39.732 "nvme_error_stat": false, 00:27:39.732 "rdma_srq_size": 0, 00:27:39.732 "io_path_stat": false, 00:27:39.732 "allow_accel_sequence": false, 00:27:39.732 "rdma_max_cq_size": 0, 00:27:39.732 "rdma_cm_event_timeout_ms": 0, 00:27:39.732 "dhchap_digests": [ 00:27:39.732 "sha256", 00:27:39.732 "sha384", 00:27:39.732 "sha512" 00:27:39.732 ], 00:27:39.732 "dhchap_dhgroups": [ 00:27:39.732 "null", 00:27:39.732 "ffdhe2048", 00:27:39.732 "ffdhe3072", 00:27:39.732 "ffdhe4096", 00:27:39.732 "ffdhe6144", 00:27:39.732 "ffdhe8192" 00:27:39.732 ] 00:27:39.732 } 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "method": "bdev_nvme_attach_controller", 00:27:39.732 "params": { 00:27:39.732 "name": "nvme0", 00:27:39.732 "trtype": "TCP", 00:27:39.732 "adrfam": "IPv4", 00:27:39.732 "traddr": "127.0.0.1", 00:27:39.732 "trsvcid": "4420", 00:27:39.732 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.732 "prchk_reftag": false, 00:27:39.732 "prchk_guard": false, 00:27:39.732 "ctrlr_loss_timeout_sec": 0, 00:27:39.732 "reconnect_delay_sec": 0, 00:27:39.732 "fast_io_fail_timeout_sec": 0, 00:27:39.732 "psk": "key0", 00:27:39.732 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:39.732 "hdgst": false, 00:27:39.732 "ddgst": false 00:27:39.732 } 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "method": "bdev_nvme_set_hotplug", 00:27:39.732 "params": { 00:27:39.732 "period_us": 100000, 00:27:39.732 "enable": false 00:27:39.732 } 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "method": "bdev_wait_for_examine" 00:27:39.732 } 00:27:39.732 ] 00:27:39.732 }, 00:27:39.732 { 00:27:39.732 "subsystem": "nbd", 00:27:39.732 "config": [] 00:27:39.732 } 00:27:39.732 ] 00:27:39.732 }' 00:27:39.732 12:28:32 keyring_file -- keyring/file.sh@114 -- # killprocess 3005639 00:27:39.732 
12:28:32 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3005639 ']' 00:27:39.732 12:28:32 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3005639 00:27:39.732 12:28:32 keyring_file -- common/autotest_common.sh@955 -- # uname 00:27:39.732 12:28:32 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:39.732 12:28:32 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3005639 00:27:39.732 12:28:32 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:39.732 12:28:32 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:39.732 12:28:32 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3005639' 00:27:39.732 killing process with pid 3005639 00:27:39.732 12:28:32 keyring_file -- common/autotest_common.sh@969 -- # kill 3005639 00:27:39.732 Received shutdown signal, test time was about 1.000000 seconds 00:27:39.732 00:27:39.732 Latency(us) 00:27:39.732 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:39.732 =================================================================================================================== 00:27:39.732 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:39.732 12:28:32 keyring_file -- common/autotest_common.sh@974 -- # wait 3005639 00:27:39.993 12:28:33 keyring_file -- keyring/file.sh@117 -- # bperfpid=3007091 00:27:39.993 12:28:33 keyring_file -- keyring/file.sh@119 -- # waitforlisten 3007091 /var/tmp/bperf.sock 00:27:39.993 12:28:33 keyring_file -- common/autotest_common.sh@831 -- # '[' -z 3007091 ']' 00:27:39.993 12:28:33 keyring_file -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:39.993 12:28:33 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:27:39.993 12:28:33 keyring_file -- 
common/autotest_common.sh@836 -- # local max_retries=100 00:27:39.993 12:28:33 keyring_file -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:39.993 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:39.993 12:28:33 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:27:39.993 "subsystems": [ 00:27:39.993 { 00:27:39.993 "subsystem": "keyring", 00:27:39.993 "config": [ 00:27:39.993 { 00:27:39.993 "method": "keyring_file_add_key", 00:27:39.993 "params": { 00:27:39.993 "name": "key0", 00:27:39.993 "path": "/tmp/tmp.LvRRVb7ET1" 00:27:39.993 } 00:27:39.993 }, 00:27:39.993 { 00:27:39.993 "method": "keyring_file_add_key", 00:27:39.993 "params": { 00:27:39.993 "name": "key1", 00:27:39.994 "path": "/tmp/tmp.MtwuhNxco0" 00:27:39.994 } 00:27:39.994 } 00:27:39.994 ] 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "subsystem": "iobuf", 00:27:39.994 "config": [ 00:27:39.994 { 00:27:39.994 "method": "iobuf_set_options", 00:27:39.994 "params": { 00:27:39.994 "small_pool_count": 8192, 00:27:39.994 "large_pool_count": 1024, 00:27:39.994 "small_bufsize": 8192, 00:27:39.994 "large_bufsize": 135168 00:27:39.994 } 00:27:39.994 } 00:27:39.994 ] 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "subsystem": "sock", 00:27:39.994 "config": [ 00:27:39.994 { 00:27:39.994 "method": "sock_set_default_impl", 00:27:39.994 "params": { 00:27:39.994 "impl_name": "posix" 00:27:39.994 } 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "method": "sock_impl_set_options", 00:27:39.994 "params": { 00:27:39.994 "impl_name": "ssl", 00:27:39.994 "recv_buf_size": 4096, 00:27:39.994 "send_buf_size": 4096, 00:27:39.994 "enable_recv_pipe": true, 00:27:39.994 "enable_quickack": false, 00:27:39.994 "enable_placement_id": 0, 00:27:39.994 "enable_zerocopy_send_server": true, 00:27:39.994 "enable_zerocopy_send_client": false, 00:27:39.994 "zerocopy_threshold": 0, 00:27:39.994 "tls_version": 0, 
00:27:39.994 "enable_ktls": false 00:27:39.994 } 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "method": "sock_impl_set_options", 00:27:39.994 "params": { 00:27:39.994 "impl_name": "posix", 00:27:39.994 "recv_buf_size": 2097152, 00:27:39.994 "send_buf_size": 2097152, 00:27:39.994 "enable_recv_pipe": true, 00:27:39.994 "enable_quickack": false, 00:27:39.994 "enable_placement_id": 0, 00:27:39.994 "enable_zerocopy_send_server": true, 00:27:39.994 "enable_zerocopy_send_client": false, 00:27:39.994 "zerocopy_threshold": 0, 00:27:39.994 "tls_version": 0, 00:27:39.994 "enable_ktls": false 00:27:39.994 } 00:27:39.994 } 00:27:39.994 ] 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "subsystem": "vmd", 00:27:39.994 "config": [] 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "subsystem": "accel", 00:27:39.994 "config": [ 00:27:39.994 { 00:27:39.994 "method": "accel_set_options", 00:27:39.994 "params": { 00:27:39.994 "small_cache_size": 128, 00:27:39.994 "large_cache_size": 16, 00:27:39.994 "task_count": 2048, 00:27:39.994 "sequence_count": 2048, 00:27:39.994 "buf_count": 2048 00:27:39.994 } 00:27:39.994 } 00:27:39.994 ] 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "subsystem": "bdev", 00:27:39.994 "config": [ 00:27:39.994 { 00:27:39.994 "method": "bdev_set_options", 00:27:39.994 "params": { 00:27:39.994 "bdev_io_pool_size": 65535, 00:27:39.994 "bdev_io_cache_size": 256, 00:27:39.994 "bdev_auto_examine": true, 00:27:39.994 "iobuf_small_cache_size": 128, 00:27:39.994 "iobuf_large_cache_size": 16 00:27:39.994 } 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "method": "bdev_raid_set_options", 00:27:39.994 "params": { 00:27:39.994 "process_window_size_kb": 1024, 00:27:39.994 "process_max_bandwidth_mb_sec": 0 00:27:39.994 } 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "method": "bdev_iscsi_set_options", 00:27:39.994 "params": { 00:27:39.994 "timeout_sec": 30 00:27:39.994 } 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "method": "bdev_nvme_set_options", 00:27:39.994 "params": { 00:27:39.994 
"action_on_timeout": "none", 00:27:39.994 "timeout_us": 0, 00:27:39.994 "timeout_admin_us": 0, 00:27:39.994 "keep_alive_timeout_ms": 10000, 00:27:39.994 "arbitration_burst": 0, 00:27:39.994 "low_priority_weight": 0, 00:27:39.994 "medium_priority_weight": 0, 00:27:39.994 "high_priority_weight": 0, 00:27:39.994 "nvme_adminq_poll_period_us": 10000, 00:27:39.994 "nvme_ioq_poll_period_us": 0, 00:27:39.994 "io_queue_requests": 512, 00:27:39.994 "delay_cmd_submit": true, 00:27:39.994 "transport_retry_count": 4, 00:27:39.994 "bdev_retry_count": 3, 00:27:39.994 "transport_ack_timeout": 0, 00:27:39.994 "ctrlr_loss_timeout_sec": 0, 00:27:39.994 "reconnect_delay_sec": 0, 00:27:39.994 "fast_io_fail_timeout_sec": 0, 00:27:39.994 "disable_auto_failback": false, 00:27:39.994 "generate_uuids": false, 00:27:39.994 "transport_tos": 0, 00:27:39.994 "nvme_error_stat": false, 00:27:39.994 "rdma_srq_size": 0, 00:27:39.994 "io_path_stat": false, 00:27:39.994 "allow_accel_sequence": false, 00:27:39.994 "rdma_max_cq_size": 0, 00:27:39.994 "rdma_cm_event_timeout_ms": 0, 00:27:39.994 "dhchap_digests": [ 00:27:39.994 "sha256", 00:27:39.994 "sha384", 00:27:39.994 "sha512" 00:27:39.994 ], 00:27:39.994 "dhchap_dhgroups": [ 00:27:39.994 "null", 00:27:39.994 "ffdhe2048", 00:27:39.994 "ffdhe3072", 00:27:39.994 "ffdhe4096", 00:27:39.994 "ffdhe6144", 00:27:39.994 "ffdhe8192" 00:27:39.994 ] 00:27:39.994 } 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "method": "bdev_nvme_attach_controller", 00:27:39.994 "params": { 00:27:39.994 "name": "nvme0", 00:27:39.994 "trtype": "TCP", 00:27:39.994 "adrfam": "IPv4", 00:27:39.994 "traddr": "127.0.0.1", 00:27:39.994 "trsvcid": "4420", 00:27:39.994 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:39.994 "prchk_reftag": false, 00:27:39.994 "prchk_guard": false, 00:27:39.994 "ctrlr_loss_timeout_sec": 0, 00:27:39.994 "reconnect_delay_sec": 0, 00:27:39.994 "fast_io_fail_timeout_sec": 0, 00:27:39.994 "psk": "key0", 00:27:39.994 "hostnqn": "nqn.2016-06.io.spdk:host0", 
00:27:39.994 "hdgst": false, 00:27:39.994 "ddgst": false 00:27:39.994 } 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "method": "bdev_nvme_set_hotplug", 00:27:39.994 "params": { 00:27:39.994 "period_us": 100000, 00:27:39.994 "enable": false 00:27:39.994 } 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "method": "bdev_wait_for_examine" 00:27:39.994 } 00:27:39.994 ] 00:27:39.994 }, 00:27:39.994 { 00:27:39.994 "subsystem": "nbd", 00:27:39.994 "config": [] 00:27:39.995 } 00:27:39.995 ] 00:27:39.995 }' 00:27:39.995 12:28:33 keyring_file -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:39.995 12:28:33 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:39.995 [2024-07-26 12:28:33.044972] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 00:27:39.995 [2024-07-26 12:28:33.045080] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007091 ] 00:27:39.995 EAL: No free 2048 kB hugepages reported on node 1 00:27:39.995 [2024-07-26 12:28:33.102580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.995 [2024-07-26 12:28:33.215845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:40.254 [2024-07-26 12:28:33.409722] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:40.819 12:28:33 keyring_file -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:40.819 12:28:33 keyring_file -- common/autotest_common.sh@864 -- # return 0 00:27:40.819 12:28:33 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:27:40.819 12:28:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:40.819 12:28:33 keyring_file -- keyring/file.sh@120 -- # jq length 
00:27:41.077 12:28:34 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:27:41.077 12:28:34 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:27:41.077 12:28:34 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:27:41.077 12:28:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:41.077 12:28:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.077 12:28:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.077 12:28:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:27:41.336 12:28:34 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:27:41.336 12:28:34 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:27:41.336 12:28:34 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:27:41.336 12:28:34 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:27:41.336 12:28:34 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:41.336 12:28:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:41.336 12:28:34 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:27:41.594 12:28:34 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:27:41.594 12:28:34 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:27:41.594 12:28:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:27:41.594 12:28:34 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:27:41.858 12:28:34 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:27:41.858 12:28:34 keyring_file -- keyring/file.sh@1 -- # cleanup 00:27:41.858 12:28:34 keyring_file -- keyring/file.sh@19 -- # rm -f 
/tmp/tmp.LvRRVb7ET1 /tmp/tmp.MtwuhNxco0 00:27:41.858 12:28:34 keyring_file -- keyring/file.sh@20 -- # killprocess 3007091 00:27:41.858 12:28:34 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3007091 ']' 00:27:41.858 12:28:34 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3007091 00:27:41.858 12:28:34 keyring_file -- common/autotest_common.sh@955 -- # uname 00:27:41.858 12:28:34 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:41.858 12:28:34 keyring_file -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3007091 00:27:41.858 12:28:35 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:41.858 12:28:35 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:41.858 12:28:35 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3007091' 00:27:41.858 killing process with pid 3007091 00:27:41.858 12:28:35 keyring_file -- common/autotest_common.sh@969 -- # kill 3007091 00:27:41.858 Received shutdown signal, test time was about 1.000000 seconds 00:27:41.858 00:27:41.858 Latency(us) 00:27:41.858 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:41.858 =================================================================================================================== 00:27:41.858 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:27:41.858 12:28:35 keyring_file -- common/autotest_common.sh@974 -- # wait 3007091 00:27:42.156 12:28:35 keyring_file -- keyring/file.sh@21 -- # killprocess 3005622 00:27:42.156 12:28:35 keyring_file -- common/autotest_common.sh@950 -- # '[' -z 3005622 ']' 00:27:42.156 12:28:35 keyring_file -- common/autotest_common.sh@954 -- # kill -0 3005622 00:27:42.156 12:28:35 keyring_file -- common/autotest_common.sh@955 -- # uname 00:27:42.156 12:28:35 keyring_file -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:42.156 12:28:35 keyring_file -- 
common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3005622 00:27:42.156 12:28:35 keyring_file -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:42.156 12:28:35 keyring_file -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:42.156 12:28:35 keyring_file -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3005622' 00:27:42.156 killing process with pid 3005622 00:27:42.156 12:28:35 keyring_file -- common/autotest_common.sh@969 -- # kill 3005622 00:27:42.156 [2024-07-26 12:28:35.309220] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:27:42.156 12:28:35 keyring_file -- common/autotest_common.sh@974 -- # wait 3005622 00:27:42.729 00:27:42.729 real 0m14.334s 00:27:42.729 user 0m35.311s 00:27:42.729 sys 0m3.217s 00:27:42.729 12:28:35 keyring_file -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:42.729 12:28:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:27:42.729 ************************************ 00:27:42.729 END TEST keyring_file 00:27:42.729 ************************************ 00:27:42.729 12:28:35 -- spdk/autotest.sh@300 -- # [[ y == y ]] 00:27:42.729 12:28:35 -- spdk/autotest.sh@301 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:42.729 12:28:35 -- common/autotest_common.sh@1101 -- # '[' 2 -le 1 ']' 00:27:42.729 12:28:35 -- common/autotest_common.sh@1107 -- # xtrace_disable 00:27:42.729 12:28:35 -- common/autotest_common.sh@10 -- # set +x 00:27:42.729 ************************************ 00:27:42.729 START TEST keyring_linux 00:27:42.729 ************************************ 00:27:42.729 12:28:35 keyring_linux -- common/autotest_common.sh@1125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:27:42.729 * Looking for test storage... 
00:27:42.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:27:42.729 12:28:35 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:27:42.729 12:28:35 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:5b23e107-7094-e311-b1cb-001e67a97d55 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=5b23e107-7094-e311-b1cb-001e67a97d55 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:42.729 12:28:35 keyring_linux -- 
nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:42.729 12:28:35 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:42.729 12:28:35 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:42.729 12:28:35 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:42.729 12:28:35 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.729 12:28:35 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.729 12:28:35 keyring_linux -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.729 12:28:35 keyring_linux -- paths/export.sh@5 -- # export PATH 00:27:42.729 12:28:35 keyring_linux -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:42.729 12:28:35 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:27:42.729 12:28:35 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:27:42.729 12:28:35 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:27:42.729 12:28:35 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:27:42.729 12:28:35 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:27:42.729 12:28:35 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:27:42.729 12:28:35 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:27:42.729 12:28:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:42.729 12:28:35 keyring_linux -- 
keyring/common.sh@17 -- # name=key0 00:27:42.729 12:28:35 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:27:42.729 12:28:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:42.729 12:28:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:27:42.729 12:28:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:27:42.729 12:28:35 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.730 12:28:35 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:42.730 12:28:35 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:27:42.730 12:28:35 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:42.730 12:28:35 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:42.730 12:28:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:27:42.730 12:28:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:27:42.730 /tmp/:spdk-test:key0 00:27:42.730 12:28:35 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:27:42.730 12:28:35 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:27:42.730 12:28:35 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:27:42.730 12:28:35 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:27:42.730 12:28:35 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:27:42.730 12:28:35 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:27:42.730 12:28:35 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:27:42.730 12:28:35 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 
00:27:42.730 12:28:35 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:27:42.730 12:28:35 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:27:42.730 12:28:35 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:27:42.730 12:28:35 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:27:42.730 12:28:35 keyring_linux -- nvmf/common.sh@705 -- # python - 00:27:42.988 12:28:35 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:27:42.988 12:28:35 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:27:42.988 /tmp/:spdk-test:key1 00:27:42.988 12:28:35 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3007457 00:27:42.988 12:28:35 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:27:42.988 12:28:35 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 3007457 00:27:42.988 12:28:35 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3007457 ']' 00:27:42.988 12:28:35 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:42.988 12:28:35 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:42.988 12:28:35 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:42.988 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:42.988 12:28:35 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:42.988 12:28:35 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:42.988 [2024-07-26 12:28:36.049573] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:27:42.988 [2024-07-26 12:28:36.049667] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007457 ] 00:27:42.988 EAL: No free 2048 kB hugepages reported on node 1 00:27:42.988 [2024-07-26 12:28:36.108622] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:42.988 [2024-07-26 12:28:36.227561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:43.247 12:28:36 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:43.247 12:28:36 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:27:43.247 12:28:36 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:27:43.247 12:28:36 keyring_linux -- common/autotest_common.sh@561 -- # xtrace_disable 00:27:43.247 12:28:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:43.247 [2024-07-26 12:28:36.492492] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:43.505 null0 00:27:43.505 [2024-07-26 12:28:36.524540] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:27:43.505 [2024-07-26 12:28:36.525003] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:27:43.505 12:28:36 keyring_linux -- common/autotest_common.sh@589 -- # [[ 0 == 0 ]] 00:27:43.505 12:28:36 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:27:43.505 1024937909 00:27:43.505 12:28:36 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:27:43.505 118581170 00:27:43.505 12:28:36 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3007591 00:27:43.505 12:28:36 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3007591 
/var/tmp/bperf.sock 00:27:43.505 12:28:36 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:27:43.505 12:28:36 keyring_linux -- common/autotest_common.sh@831 -- # '[' -z 3007591 ']' 00:27:43.505 12:28:36 keyring_linux -- common/autotest_common.sh@835 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:43.505 12:28:36 keyring_linux -- common/autotest_common.sh@836 -- # local max_retries=100 00:27:43.505 12:28:36 keyring_linux -- common/autotest_common.sh@838 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:43.505 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:43.505 12:28:36 keyring_linux -- common/autotest_common.sh@840 -- # xtrace_disable 00:27:43.505 12:28:36 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:43.505 [2024-07-26 12:28:36.596235] Starting SPDK v24.09-pre git sha1 fb47d9517 / DPDK 24.03.0 initialization... 
00:27:43.505 [2024-07-26 12:28:36.596309] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3007591 ] 00:27:43.505 EAL: No free 2048 kB hugepages reported on node 1 00:27:43.505 [2024-07-26 12:28:36.653534] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:43.764 [2024-07-26 12:28:36.765197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:43.764 12:28:36 keyring_linux -- common/autotest_common.sh@860 -- # (( i == 0 )) 00:27:43.764 12:28:36 keyring_linux -- common/autotest_common.sh@864 -- # return 0 00:27:43.764 12:28:36 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:27:43.764 12:28:36 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:27:44.021 12:28:37 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:27:44.021 12:28:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:27:44.278 12:28:37 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:44.278 12:28:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:27:44.535 [2024-07-26 12:28:37.634747] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:27:44.535 
nvme0n1 00:27:44.535 12:28:37 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:27:44.535 12:28:37 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:27:44.535 12:28:37 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:44.535 12:28:37 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:44.535 12:28:37 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:44.535 12:28:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:44.793 12:28:37 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:27:44.793 12:28:37 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:44.793 12:28:37 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:27:44.793 12:28:37 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:27:44.793 12:28:37 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:27:44.793 12:28:37 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:44.793 12:28:37 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:27:45.051 12:28:38 keyring_linux -- keyring/linux.sh@25 -- # sn=1024937909 00:27:45.051 12:28:38 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:27:45.051 12:28:38 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:27:45.051 12:28:38 keyring_linux -- keyring/linux.sh@26 -- # [[ 1024937909 == \1\0\2\4\9\3\7\9\0\9 ]] 00:27:45.051 12:28:38 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 1024937909 00:27:45.051 12:28:38 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == 
\N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:27:45.051 12:28:38 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:45.308 Running I/O for 1 seconds... 00:27:46.243 00:27:46.243 Latency(us) 00:27:46.243 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:46.243 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:46.243 nvme0n1 : 1.02 4395.77 17.17 0.00 0.00 28836.72 9951.76 40777.96 00:27:46.243 =================================================================================================================== 00:27:46.243 Total : 4395.77 17.17 0.00 0.00 28836.72 9951.76 40777.96 00:27:46.243 0 00:27:46.243 12:28:39 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:27:46.243 12:28:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:27:46.500 12:28:39 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:27:46.500 12:28:39 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:27:46.500 12:28:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:27:46.500 12:28:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:27:46.500 12:28:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:27:46.500 12:28:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:27:46.757 12:28:39 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:27:46.757 12:28:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:27:46.757 12:28:39 keyring_linux -- keyring/linux.sh@23 -- # return 00:27:46.757 12:28:39 keyring_linux -- 
keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:46.757 12:28:39 keyring_linux -- common/autotest_common.sh@650 -- # local es=0 00:27:46.758 12:28:39 keyring_linux -- common/autotest_common.sh@652 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:46.758 12:28:39 keyring_linux -- common/autotest_common.sh@638 -- # local arg=bperf_cmd 00:27:46.758 12:28:39 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:46.758 12:28:39 keyring_linux -- common/autotest_common.sh@642 -- # type -t bperf_cmd 00:27:46.758 12:28:39 keyring_linux -- common/autotest_common.sh@642 -- # case "$(type -t "$arg")" in 00:27:46.758 12:28:39 keyring_linux -- common/autotest_common.sh@653 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:46.758 12:28:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:27:47.017 [2024-07-26 12:28:40.137861] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:27:47.017 [2024-07-26 12:28:40.138638] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x196a890 (107): Transport endpoint is not connected 00:27:47.017 [2024-07-26 12:28:40.139627] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush 
tqpair=0x196a890 (9): Bad file descriptor 00:27:47.017 [2024-07-26 12:28:40.140626] nvme_ctrlr.c:4168:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:27:47.017 [2024-07-26 12:28:40.140651] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:27:47.018 [2024-07-26 12:28:40.140668] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:27:47.018 request: 00:27:47.018 { 00:27:47.018 "name": "nvme0", 00:27:47.018 "trtype": "tcp", 00:27:47.018 "traddr": "127.0.0.1", 00:27:47.018 "adrfam": "ipv4", 00:27:47.018 "trsvcid": "4420", 00:27:47.018 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:27:47.018 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:27:47.018 "prchk_reftag": false, 00:27:47.018 "prchk_guard": false, 00:27:47.018 "hdgst": false, 00:27:47.018 "ddgst": false, 00:27:47.018 "psk": ":spdk-test:key1", 00:27:47.018 "method": "bdev_nvme_attach_controller", 00:27:47.018 "req_id": 1 00:27:47.018 } 00:27:47.018 Got JSON-RPC error response 00:27:47.018 response: 00:27:47.018 { 00:27:47.018 "code": -5, 00:27:47.018 "message": "Input/output error" 00:27:47.018 } 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@653 -- # es=1 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@661 -- # (( es > 128 )) 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@672 -- # [[ -n '' ]] 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@677 -- # (( !es == 0 )) 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl 
search @s user :spdk-test:key0 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@33 -- # sn=1024937909 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 1024937909 00:27:47.018 1 links removed 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@33 -- # sn=118581170 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 118581170 00:27:47.018 1 links removed 00:27:47.018 12:28:40 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3007591 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3007591 ']' 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3007591 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3007591 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_1 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_1 = sudo ']' 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3007591' 00:27:47.018 killing process with pid 3007591 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@969 -- # kill 3007591 00:27:47.018 Received shutdown signal, test time was about 1.000000 seconds 00:27:47.018 00:27:47.018 Latency(us) 00:27:47.018 Device Information 
: runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:47.018 =================================================================================================================== 00:27:47.018 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:47.018 12:28:40 keyring_linux -- common/autotest_common.sh@974 -- # wait 3007591 00:27:47.276 12:28:40 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3007457 00:27:47.276 12:28:40 keyring_linux -- common/autotest_common.sh@950 -- # '[' -z 3007457 ']' 00:27:47.276 12:28:40 keyring_linux -- common/autotest_common.sh@954 -- # kill -0 3007457 00:27:47.276 12:28:40 keyring_linux -- common/autotest_common.sh@955 -- # uname 00:27:47.276 12:28:40 keyring_linux -- common/autotest_common.sh@955 -- # '[' Linux = Linux ']' 00:27:47.276 12:28:40 keyring_linux -- common/autotest_common.sh@956 -- # ps --no-headers -o comm= 3007457 00:27:47.276 12:28:40 keyring_linux -- common/autotest_common.sh@956 -- # process_name=reactor_0 00:27:47.276 12:28:40 keyring_linux -- common/autotest_common.sh@960 -- # '[' reactor_0 = sudo ']' 00:27:47.276 12:28:40 keyring_linux -- common/autotest_common.sh@968 -- # echo 'killing process with pid 3007457' 00:27:47.276 killing process with pid 3007457 00:27:47.276 12:28:40 keyring_linux -- common/autotest_common.sh@969 -- # kill 3007457 00:27:47.276 12:28:40 keyring_linux -- common/autotest_common.sh@974 -- # wait 3007457 00:27:47.843 00:27:47.843 real 0m5.092s 00:27:47.843 user 0m9.531s 00:27:47.843 sys 0m1.542s 00:27:47.843 12:28:40 keyring_linux -- common/autotest_common.sh@1126 -- # xtrace_disable 00:27:47.843 12:28:40 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:27:47.843 ************************************ 00:27:47.843 END TEST keyring_linux 00:27:47.843 ************************************ 00:27:47.843 12:28:40 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:27:47.843 12:28:40 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:27:47.843 12:28:40 -- spdk/autotest.sh@320 -- # '[' 0 -eq 1 
']' 00:27:47.843 12:28:40 -- spdk/autotest.sh@325 -- # '[' 0 -eq 1 ']' 00:27:47.843 12:28:40 -- spdk/autotest.sh@334 -- # '[' 0 -eq 1 ']' 00:27:47.843 12:28:40 -- spdk/autotest.sh@339 -- # '[' 0 -eq 1 ']' 00:27:47.843 12:28:40 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:27:47.843 12:28:40 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:27:47.843 12:28:40 -- spdk/autotest.sh@351 -- # '[' 0 -eq 1 ']' 00:27:47.843 12:28:40 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:27:47.843 12:28:40 -- spdk/autotest.sh@360 -- # '[' 0 -eq 1 ']' 00:27:47.843 12:28:40 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:27:47.843 12:28:40 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:27:47.843 12:28:40 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:27:47.843 12:28:40 -- spdk/autotest.sh@379 -- # [[ 0 -eq 1 ]] 00:27:47.843 12:28:40 -- spdk/autotest.sh@384 -- # trap - SIGINT SIGTERM EXIT 00:27:47.843 12:28:40 -- spdk/autotest.sh@386 -- # timing_enter post_cleanup 00:27:47.843 12:28:40 -- common/autotest_common.sh@724 -- # xtrace_disable 00:27:47.843 12:28:40 -- common/autotest_common.sh@10 -- # set +x 00:27:47.843 12:28:40 -- spdk/autotest.sh@387 -- # autotest_cleanup 00:27:47.843 12:28:40 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:27:47.843 12:28:40 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:27:47.843 12:28:40 -- common/autotest_common.sh@10 -- # set +x 00:27:49.743 INFO: APP EXITING 00:27:49.743 INFO: killing all VMs 00:27:49.743 INFO: killing vhost app 00:27:49.743 INFO: EXIT DONE 00:27:50.678 0000:88:00.0 (8086 0a54): Already using the nvme driver 00:27:50.678 0000:00:04.7 (8086 0e27): Already using the ioatdma driver 00:27:50.678 0000:00:04.6 (8086 0e26): Already using the ioatdma driver 00:27:50.678 0000:00:04.5 (8086 0e25): Already using the ioatdma driver 00:27:50.678 0000:00:04.4 (8086 0e24): Already using the ioatdma driver 00:27:50.678 0000:00:04.3 (8086 0e23): Already using the ioatdma driver 00:27:50.678 0000:00:04.2 (8086 0e22): Already 
using the ioatdma driver 00:27:50.678 0000:00:04.1 (8086 0e21): Already using the ioatdma driver 00:27:50.678 0000:00:04.0 (8086 0e20): Already using the ioatdma driver 00:27:50.936 0000:80:04.7 (8086 0e27): Already using the ioatdma driver 00:27:50.936 0000:80:04.6 (8086 0e26): Already using the ioatdma driver 00:27:50.936 0000:80:04.5 (8086 0e25): Already using the ioatdma driver 00:27:50.936 0000:80:04.4 (8086 0e24): Already using the ioatdma driver 00:27:50.936 0000:80:04.3 (8086 0e23): Already using the ioatdma driver 00:27:50.936 0000:80:04.2 (8086 0e22): Already using the ioatdma driver 00:27:50.936 0000:80:04.1 (8086 0e21): Already using the ioatdma driver 00:27:50.936 0000:80:04.0 (8086 0e20): Already using the ioatdma driver 00:27:52.310 Cleaning 00:27:52.310 Removing: /var/run/dpdk/spdk0/config 00:27:52.310 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:27:52.310 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:27:52.310 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:27:52.310 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:27:52.310 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:27:52.310 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:27:52.310 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:27:52.310 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:27:52.310 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:27:52.310 Removing: /var/run/dpdk/spdk0/hugepage_info 00:27:52.310 Removing: /var/run/dpdk/spdk1/config 00:27:52.310 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:27:52.310 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:27:52.310 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:27:52.310 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:27:52.310 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:27:52.310 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:27:52.310 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:27:52.310 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:27:52.310 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:27:52.310 Removing: /var/run/dpdk/spdk1/hugepage_info 00:27:52.310 Removing: /var/run/dpdk/spdk1/mp_socket 00:27:52.310 Removing: /var/run/dpdk/spdk2/config 00:27:52.310 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:27:52.310 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:27:52.310 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:27:52.310 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:27:52.310 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:27:52.310 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:27:52.310 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:27:52.310 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:27:52.310 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:27:52.310 Removing: /var/run/dpdk/spdk2/hugepage_info 00:27:52.310 Removing: /var/run/dpdk/spdk3/config 00:27:52.310 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:27:52.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:27:52.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:27:52.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:27:52.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:27:52.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:27:52.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:27:52.311 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:27:52.311 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:27:52.311 Removing: /var/run/dpdk/spdk3/hugepage_info 00:27:52.311 Removing: /var/run/dpdk/spdk4/config 00:27:52.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:27:52.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:27:52.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:27:52.311 Removing: 
/var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:27:52.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:27:52.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:27:52.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:27:52.311 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:27:52.311 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:27:52.311 Removing: /var/run/dpdk/spdk4/hugepage_info 00:27:52.311 Removing: /dev/shm/bdev_svc_trace.1 00:27:52.311 Removing: /dev/shm/nvmf_trace.0 00:27:52.311 Removing: /dev/shm/spdk_tgt_trace.pid2750843 00:27:52.311 Removing: /var/run/dpdk/spdk0 00:27:52.311 Removing: /var/run/dpdk/spdk1 00:27:52.311 Removing: /var/run/dpdk/spdk2 00:27:52.311 Removing: /var/run/dpdk/spdk3 00:27:52.311 Removing: /var/run/dpdk/spdk4 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2749166 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2749919 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2750843 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2751276 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2751964 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2752114 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2752825 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2752841 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2753083 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2754280 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2755315 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2755627 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2755822 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2756029 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2756221 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2756391 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2756649 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2756829 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2757056 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2759609 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2759779 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2759945 00:27:52.311 Removing: 
/var/run/dpdk/spdk_pid2760020 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2760843 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2761006 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2761327 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2761414 00:27:52.311 Removing: /var/run/dpdk/spdk_pid2761621 00:27:52.570 Removing: /var/run/dpdk/spdk_pid2761753 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2761923 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2762055 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2762434 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2762589 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2762895 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2764979 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2767606 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2774458 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2774883 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2777517 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2777685 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2780318 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2784036 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2786228 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2792761 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2798607 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2799919 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2800583 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2810947 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2813228 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2839684 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2843029 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2846949 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2850920 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2850923 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2851576 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2852117 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2852778 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2853173 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2853179 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2853436 
00:27:52.571 Removing: /var/run/dpdk/spdk_pid2853455 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2853572 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2854114 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2854765 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2855427 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2855829 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2855831 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2855983 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2856979 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2857700 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2863042 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2888677 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2892035 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2893217 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2894535 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2894661 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2894691 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2894833 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2895267 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2896579 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2897320 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2897671 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2899367 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2899677 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2900233 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2902752 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2908660 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2911426 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2915195 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2916140 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2917236 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2919856 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2922635 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2927008 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2927011 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2929900 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2930042 00:27:52.571 Removing: 
/var/run/dpdk/spdk_pid2930181 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2930568 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2930573 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2933215 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2933658 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2936200 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2938176 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2941612 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2945327 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2951554 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2955904 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2955918 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2968866 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2969398 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2969815 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2970340 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2970918 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2971338 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2971861 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2972275 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2974767 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2974912 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2978710 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2978878 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2980487 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2985523 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2985528 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2988440 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2989842 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2991354 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2992641 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2994008 00:27:52.571 Removing: /var/run/dpdk/spdk_pid2994892 00:27:52.571 Removing: /var/run/dpdk/spdk_pid3000231 00:27:52.571 Removing: /var/run/dpdk/spdk_pid3000551 00:27:52.571 Removing: /var/run/dpdk/spdk_pid3000948 00:27:52.571 Removing: /var/run/dpdk/spdk_pid3002508 00:27:52.571 Removing: /var/run/dpdk/spdk_pid3002902 
00:27:52.571 Removing: /var/run/dpdk/spdk_pid3003183 00:27:52.571 Removing: /var/run/dpdk/spdk_pid3005622 00:27:52.571 Removing: /var/run/dpdk/spdk_pid3005639 00:27:52.571 Removing: /var/run/dpdk/spdk_pid3007091 00:27:52.571 Removing: /var/run/dpdk/spdk_pid3007457 00:27:52.571 Removing: /var/run/dpdk/spdk_pid3007591 00:27:52.830 Clean 00:27:52.830 12:28:45 -- common/autotest_common.sh@1451 -- # return 0 00:27:52.830 12:28:45 -- spdk/autotest.sh@388 -- # timing_exit post_cleanup 00:27:52.830 12:28:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:52.830 12:28:45 -- common/autotest_common.sh@10 -- # set +x 00:27:52.830 12:28:45 -- spdk/autotest.sh@390 -- # timing_exit autotest 00:27:52.830 12:28:45 -- common/autotest_common.sh@730 -- # xtrace_disable 00:27:52.830 12:28:45 -- common/autotest_common.sh@10 -- # set +x 00:27:52.830 12:28:45 -- spdk/autotest.sh@391 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:27:52.830 12:28:45 -- spdk/autotest.sh@393 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:27:52.830 12:28:45 -- spdk/autotest.sh@393 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:27:52.830 12:28:45 -- spdk/autotest.sh@395 -- # hash lcov 00:27:52.830 12:28:45 -- spdk/autotest.sh@395 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:27:52.830 12:28:45 -- spdk/autotest.sh@397 -- # hostname 00:27:52.830 12:28:45 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-gp-11 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:27:53.088 geninfo: WARNING: invalid characters removed from testname! 
00:28:25.191 12:29:14 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:25.191 12:29:18 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:28.484 12:29:21 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:31.022 12:29:23 -- spdk/autotest.sh@401 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:34.315 12:29:26 -- spdk/autotest.sh@402 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:36.853 12:29:29 -- spdk/autotest.sh@403 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:28:40.149 12:29:32 -- spdk/autotest.sh@404 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:28:40.149 12:29:32 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:40.149 12:29:32 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:28:40.149 12:29:32 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:40.149 12:29:32 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:40.149 12:29:32 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.149 12:29:32 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:40.149 12:29:32 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:40.149 12:29:32 -- paths/export.sh@5 -- $ export PATH
00:28:40.149 12:29:32 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:28:40.149 12:29:32 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output
00:28:40.149 12:29:32 -- common/autobuild_common.sh@447 -- $ date +%s
00:28:40.149 12:29:32 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721989772.XXXXXX
00:28:40.149 12:29:32 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721989772.uJkhQI
00:28:40.149 12:29:32 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]]
00:28:40.149 12:29:32 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']'
00:28:40.149 12:29:32 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/'
00:28:40.149 12:29:32 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp'
00:28:40.149 12:29:32 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs'
00:28:40.149 12:29:32 -- common/autobuild_common.sh@463 -- $ get_config_params
00:28:40.149 12:29:32 -- common/autotest_common.sh@398 -- $ xtrace_disable
00:28:40.149 12:29:32 -- common/autotest_common.sh@10 -- $ set +x
00:28:40.149 12:29:32 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user'
00:28:40.149 12:29:32 -- common/autobuild_common.sh@465 -- $ start_monitor_resources
00:28:40.149 12:29:32 -- pm/common@17 -- $ local monitor
00:28:40.149 12:29:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:40.149 12:29:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:40.149 12:29:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:40.149 12:29:32 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:40.149 12:29:32 -- pm/common@21 -- $ date +%s
00:28:40.149 12:29:32 -- pm/common@21 -- $ date +%s
00:28:40.149 12:29:32 -- pm/common@25 -- $ sleep 1
00:28:40.149 12:29:32 -- pm/common@21 -- $ date +%s
00:28:40.149 12:29:32 -- pm/common@21 -- $ date +%s
00:28:40.149 12:29:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721989772
00:28:40.149 12:29:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721989772
00:28:40.149 12:29:32 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721989772
00:28:40.149 12:29:32 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721989772
00:28:40.149 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721989772_collect-vmstat.pm.log
00:28:40.149 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721989772_collect-cpu-load.pm.log
00:28:40.149 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721989772_collect-cpu-temp.pm.log
00:28:40.149 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721989772_collect-bmc-pm.bmc.pm.log
00:28:40.733 12:29:33 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT
00:28:40.733 12:29:33 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j48
00:28:40.733 12:29:33 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:40.733 12:29:33 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]]
00:28:40.733 12:29:33 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]]
00:28:40.733 12:29:33 -- spdk/autopackage.sh@19 -- $ timing_finish
00:28:40.733 12:29:33 -- common/autotest_common.sh@736 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:28:40.733 12:29:33 -- common/autotest_common.sh@737 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']'
00:28:40.733 12:29:33 -- common/autotest_common.sh@739 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:28:40.733 12:29:33 -- spdk/autopackage.sh@20 -- $ exit 0
00:28:40.733 12:29:33 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources
00:28:40.733 12:29:33 -- pm/common@29 -- $ signal_monitor_resources TERM
00:28:40.733 12:29:33 -- pm/common@40 -- $ local monitor pid pids signal=TERM
00:28:40.733 12:29:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:40.733 12:29:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]]
00:28:40.733 12:29:33 -- pm/common@44 -- $ pid=3017142
00:28:40.733 12:29:33 -- pm/common@50 -- $ kill -TERM 3017142
00:28:40.733 12:29:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:40.733 12:29:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]]
00:28:40.733 12:29:33 -- pm/common@44 -- $ pid=3017144
00:28:40.733 12:29:33 -- pm/common@50 -- $ kill -TERM 3017144
00:28:40.733 12:29:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:40.733 12:29:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]]
00:28:40.733 12:29:33 -- pm/common@44 -- $ pid=3017146
00:28:40.733 12:29:33 -- pm/common@50 -- $ kill -TERM 3017146
00:28:40.733 12:29:33 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:28:40.733 12:29:33 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]]
00:28:40.733 12:29:33 -- pm/common@44 -- $ pid=3017175
00:28:40.733 12:29:33 -- pm/common@50 -- $ sudo -E kill -TERM 3017175
00:28:40.733 + [[ -n 2665460 ]]
00:28:40.733 + sudo kill 2665460
00:28:40.755 [Pipeline] }
00:28:40.782 [Pipeline] // stage
00:28:40.789 [Pipeline] }
00:28:40.800 [Pipeline] // timeout
00:28:40.803 [Pipeline] }
00:28:40.813 [Pipeline] // catchError
00:28:40.816 [Pipeline] }
00:28:40.825 [Pipeline] // wrap
00:28:40.829 [Pipeline] }
00:28:40.838 [Pipeline] // catchError
00:28:40.843 [Pipeline] stage
00:28:40.845 [Pipeline] { (Epilogue)
00:28:40.854 [Pipeline] catchError
00:28:40.855 [Pipeline] {
00:28:40.864 [Pipeline] echo
00:28:40.865 Cleanup processes
00:28:40.869 [Pipeline] sh
00:28:41.151 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:41.151 3017282 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache
00:28:41.151 3017410 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:41.164 [Pipeline] sh
00:28:41.478 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:28:41.478 ++ awk '{print $1}'
00:28:41.478 ++ grep -v 'sudo pgrep'
00:28:41.478 + sudo kill -9 3017282
00:28:41.489 [Pipeline] sh
00:28:41.775 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:28:49.892 [Pipeline] sh
00:28:50.178 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:28:50.178 Artifacts sizes are good
00:28:50.191 [Pipeline] archiveArtifacts
00:28:50.197 Archiving artifacts
00:28:50.411 [Pipeline] sh
00:28:50.693 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:28:50.709 [Pipeline] cleanWs
00:28:50.720 [WS-CLEANUP] Deleting project workspace...
00:28:50.720 [WS-CLEANUP] Deferred wipeout is used...
00:28:50.727 [WS-CLEANUP] done
00:28:50.729 [Pipeline] }
00:28:50.748 [Pipeline] // catchError
00:28:50.761 [Pipeline] sh
00:28:51.042 + logger -p user.info -t JENKINS-CI
00:28:51.051 [Pipeline] }
00:28:51.067 [Pipeline] // stage
00:28:51.073 [Pipeline] }
00:28:51.090 [Pipeline] // node
00:28:51.096 [Pipeline] End of Pipeline
00:28:51.132 Finished: SUCCESS